Often, in the real world, entities have two or more representations in databases. Duplicate records do not share a common key and/or they contain errors that make duplicate matching a difficult task. Errors are introduced as the result of transcription errors, incomplete information, lack of standard formats, or any combination of these factors. In this article, we present a thorough analysis of the literature on duplicate record detection. We cover similarity metrics that are commonly used to detect similar field entries, and we present an extensive set of duplicate detection algorithms that can detect approximately duplicate records in a database. We also cover multiple techniques for improving the efficiency and scalability of approximate duplicate detection algorithms. We conclude with coverage of existing tools and a brief discussion of the major open problems in the area.
Introduction
Databases play an important role in today's IT-based economy. Many industries and systems depend on the accuracy of databases to carry out operations. Therefore, the quality of the information (or the lack thereof) stored in the databases can have significant cost implications for a system that relies on information to function and conduct business. In an error-free system with perfectly clean data, the construction of a comprehensive view of the data consists of linking --in relational terms, joining-- two or more tables on their key fields. Unfortunately, data often lack a unique, global identifier that would permit such an operation. Furthermore, the data are neither carefully controlled for quality nor defined in a consistent way across different data sources. Thus, data quality is often compromised by many factors, including data entry errors (e.g., Microsft instead of Microsoft), missing integrity constraints (e.g., allowing entries such as EmployeeAge=567), and multiple conventions for recording information (e.g., 44 W. 4th St. versus 44 West Fourth Street). To make things worse, in independently managed databases not only the values, but also the structure, semantics, and underlying assumptions about the data may differ.
Often, while integrating data from different sources to implement a data warehouse, organizations become aware of potential systematic differences or conflicts. Such problems fall under the umbrella term data heterogeneity[1]. Data cleaning[2], or data scrubbing[3], refers to the process of resolving such identification problems in the data. We distinguish between two types of data heterogeneity: structural and lexical. Structural heterogeneity occurs when the fields of the tuples in the database are structured differently in different databases. For example, in one database, the customer address might be recorded in one field named, say, addr, while in another database the same information might be stored in multiple fields such as street, city, state, and zipcode. Lexical heterogeneity occurs when the tuples have identically structured fields across databases, but the data use different representations to refer to the same real-world object (e.g., StreetAddress=44 W. 4th St. versus StreetAddress=44 West Fourth Street).
In this paper, we focus on the problem of lexical heterogeneity and survey various techniques which have been developed for addressing this problem. We focus on the case where the input is a set of structured and properly segmented records, i.e., we focus mainly on cases of database records. Hence, we do not cover solutions for various other problems, such as that of mirror detection, in which the goal is to detect similar or identical web pages (e.g., see[4][5]). Also, we do not cover solutions for problems such as anaphora resolution[6], in which the problem is to locate different mentions of the same entity in free text (e.g., to recognize that the phrase "President of the U.S." refers to the same entity as "George W. Bush"). We should note that the algorithms developed for mirror detection or for anaphora resolution are often applicable to the task of duplicate detection: techniques for mirror detection have been used for the detection of duplicate database records, and techniques for anaphora resolution are commonly used as an integral part of deduplication in relations that are extracted from free text using information extraction systems[7].
The problem that we study has been known for more than five decades as the record linkage or the record matching problem[8][9][10][11][12][13] in the statistics community. The goal of record matching is to identify records in the same or different databases that refer to the same real-world entity, even if the records are not identical. In slightly ironic fashion, the same problem has multiple names across research communities. In the database community, the problem is described as merge-purge[14], data deduplication[15], and instance identification[16]; in the AI community, the same problem is described as database hardening[17] and name matching[18]. The names coreference resolution, identity uncertainty, and duplicate detection are also commonly used to refer to the same task. We will use the term duplicate record detection in this paper.
The remaining part of this paper is organized as follows: In Data preparation, we briefly discuss the necessary steps in the data cleaning process, before the duplicate record detection phase. Then, Field matching describes techniques used to match individual fields, and Record matching presents techniques for matching records that contain multiple fields. Efficiency describes methods for improving the efficiency of the duplicate record detection process and Tools presents a few commercial, off-the-shelf tools used in industry for duplicate record detection and for evaluating the initial quality of the data and of the matched records. Finally, Conclusions concludes the paper and discusses interesting directions for future research.
Data Preparation
Duplicate record detection is the process of identifying different or multiple records that refer to one unique real-world entity or object. Typically, the process of duplicate detection is preceded by a data preparation stage, during which data entries are stored in a uniform manner in the database, resolving (at least partially) the structural heterogeneity problem. The data preparation stage includes a parsing, a data transformation, and a standardization step. The approaches that deal with data preparation are also described using the term ETL (Extraction, Transformation, Loading)[19]. These steps improve the quality of the in-flow data and make the data comparable and more usable. While data preparation is not the focus of this survey, for completeness we briefly describe the tasks performed in that stage. A comprehensive collection of papers related to various data transformation approaches can be found in[20].
Parsing is the first critical component in the data preparation stage. Parsing locates, identifies and isolates individual data elements in the source files. Parsing makes it easier to correct, standardize, and match data because it allows the comparison of individual components, rather than of long complex strings of data. For example, the appropriate parsing of name and address components into consistent packets of information is a crucial part in the data cleaning process. Multiple parsing methods have been proposed recently in the literature (e.g.,[21][22][23][24][25]) and the area continues to be an active field of research.
Data transformation refers to simple conversions that can be applied to the data in order for them to conform to the data types of their corresponding domains. In other words, this type of conversion focuses on manipulating one field at a time, without taking into account the values in related fields. The most common form of a simple transformation is the conversion of a data element from one data type to another. Such a data type conversion is usually required when a legacy or parent application stored data in a data type that makes sense within the context of the original application, but not in a newly developed or subsequent system. Renaming a field from one name to another is considered data transformation as well. Encoded values in operational systems and in external data are another problem that is addressed at this stage. These values should be converted to their decoded equivalents, so records from different sources can be compared in a uniform manner. Range checking is yet another kind of data transformation, which involves examining data in a field to ensure that it falls within the expected range, usually a numeric or date range. Lastly, dependency checking is slightly more involved, since it requires comparing the value in a particular field to the values in another field, to ensure a minimal level of consistency in the data.
Data standardization refers to the process of standardizing the information represented in certain fields to a specific content format. This is used for information that can be stored in many different ways in various data sources and must be converted to a uniform representation before the duplicate detection process starts. Without standardization, many duplicate entries could erroneously be designated as non-duplicates, based on the fact that common identifying information cannot be compared. One of the most common standardization applications involves address information. There is no one standardized way to capture addresses, so the same address can be represented in many different ways. Address standardization locates (using various parsing techniques) components such as house numbers, street names, post office boxes, apartment numbers and rural routes, which are then recorded in the database using a standardized format (e.g., 44 West Fourth Street is stored as 44 W4th St.). Date and time formatting and name and title formatting pose other standardization difficulties in a database. Typically, when operational applications are designed and constructed, there is very little uniform handling of date and time formats across applications. Because most operational environments have many different formats for representing dates and times, there is a need to transform dates and times into a standardized format. Name standardization identifies components such as first names, last names, title and middle initials and records everything using some standardized convention. Data standardization is a rather inexpensive step that can lead to fast identification of duplicates. For example, if the only difference between two records is the differently recorded address (44 West Fourth Street vs. 44 W4th St.), then the data standardization step would make the two records identical, alleviating the need for the more expensive approximate matching approaches that we describe in later sections.
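To make the idea concrete, the following minimal sketch shows how a simple rule-based standardizer might normalize street addresses before matching. The abbreviation table and function name are hypothetical; real systems use much larger, locale-specific dictionaries.

```python
import re

# Hypothetical abbreviation table; production systems use far larger,
# locale-specific dictionaries of street-type and directional abbreviations.
STREET_ABBREVIATIONS = {
    "street": "ST", "st": "ST",
    "avenue": "AVE", "ave": "AVE",
    "west": "W", "east": "E", "north": "N", "south": "S",
    "first": "1ST", "second": "2ND", "third": "3RD", "fourth": "4TH",
}

def standardize_address(address: str) -> str:
    """Upper-case, strip punctuation, and replace known tokens with canonical forms."""
    tokens = re.sub(r"[^\w\s]", " ", address).lower().split()
    normalized = [STREET_ABBREVIATIONS.get(t, t.upper()) for t in tokens]
    return " ".join(normalized)

# Both variants of the running example map to the same canonical form.
print(standardize_address("44 West Fourth Street"))   # 44 W 4TH ST
print(standardize_address("44 W. 4th St."))           # 44 W 4TH ST
```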
After the data preparation phase, the data are typically stored in tables, having comparable fields. The next step is to identify which fields should be compared. For example, it would not be meaningful to compare the contents of the field LastName with the field Address. Perkowitz et al.[26] presented a supervised technique for understanding the "semantics" of the fields that are returned by web databases. The idea was that similar values (e.g. last names) tend to appear in similar fields. Hence, by observing value overlap across fields, it is possible to parse the results into fields and discover correspondences across fields at the same time. Dasu et al.[27] significantly extend this concept and extract a "signature" from each field in the database; this signature summarizes the content of each column in the database. Then, the signatures are used to identify fields with similar values, fields whose contents are subsets of other fields and so on.
Even after parsing, data standardization, and identification of similar fields, it is not trivial to match duplicate records. Misspellings and different conventions for recording the same information still result in different, multiple representations of a unique object in the database. In the next section, we describe techniques for measuring the similarity of individual fields, and later, in Record matching we describe techniques for measuring the similarity of entire records.
Field Matching Techniques
One of the most common sources of mismatches in database entries is typographical variation in string data. Therefore, duplicate detection typically relies on string comparison techniques to deal with typographical variations. Multiple methods have been developed for this task, and each method works well for particular types of errors. While errors might appear in numeric fields as well, the related research is still in its infancy.
In this section, we describe techniques that have been applied for matching fields with string data, in the duplicate record detection context. We also review briefly some common approaches for dealing with errors in numeric data.
Character-based similarity metrics
The character-based similarity metrics are designed to handle typographical errors well. In this section, we cover the following similarity metrics:
- Edit distance,
- Affine gap distance,
- Smith-Waterman distance,
- Jaro distance metric, and
- q-gram distance.
Edit distance:
The edit distance between two strings σ1 and σ2 is the minimum number of single-character edit operations needed to transform the string σ1 into σ2. There are three types of edit operations:
- Insert a character into the string,
- Delete a character from the string, and
- Replace one character with a different character.
In the simplest form, each edit operation has cost 1. This version of edit distance is also referred to as Levenshtein distance[28]. The basic dynamic programming algorithm[29] for computing the edit distance between two strings takes O(|σ1|·|σ2|) time for two strings of length |σ1| and |σ2|, respectively. Landau and Vishkin[30] presented an algorithm for detecting in O(max{|σ1|, |σ2|}·k) time whether two strings have edit distance less than k. (Notice that the edit distance of two strings can never exceed the length of the longer string, so the interesting case is k < max{|σ1|, |σ2|}.) Needleman and Wunsch[31] modified the original edit distance model and allowed different costs for different edit distance operations. (For example, the cost of replacing O with 0 might be smaller than the cost of replacing f with q.) Ristad and Yiannilos[32] presented a method for automatically determining such costs from a set of equivalent words that are written in different ways. The edit distance metrics work well for catching typographical errors, but they are typically ineffective for other types of mismatches.
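A minimal sketch of the unit-cost (Levenshtein) dynamic program follows; it is the textbook formulation, not the Landau-Vishkin variant or the learned-cost models mentioned above.

```python
def levenshtein(s1: str, s2: str) -> int:
    """Unit-cost edit distance via the O(|s1|*|s2|) dynamic program."""
    m, n = len(s1), len(s2)
    # prev[j] holds the distance between the first i-1 characters of s1 and s2[:j].
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete a character from s1
                          curr[j - 1] + 1,     # insert a character into s1
                          prev[j - 1] + cost)  # replace (or keep) a character
        prev = curr
    return prev[n]

print(levenshtein("Microsft", "Microsoft"))   # 1
```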
Affine gap distance:
The edit distance metric described above does not work well when matching strings that have been truncated or shortened (e.g., "John R. Smith" versus "Jonathan Richard Smith"). The affine gap distance metric[33] offers a solution to this problem by introducing two extra edit operations: open gap and extend gap. The cost of extending a gap is usually smaller than the cost of opening one, and this results in smaller cost penalties for gap mismatches than the equivalent cost under the edit distance metric. The algorithm for computing the affine gap distance requires O(a·|σ1|·|σ2|) time when the maximum length of a gap is a; in the general case, the algorithm runs in approximately O(|σ1|·|σ2|·max{|σ1|, |σ2|}) steps. Bilenko et al.[18], in a spirit similar to what Ristad and Yiannilos[32] proposed for edit distance, describe how to train an edit distance model with affine gaps.
Smith-Waterman distance:
Smith and Waterman[34] described an extension of edit distance and affine gap distance in which mismatches at the beginning and the end of strings have lower costs than mismatches in the middle. This metric allows for better local alignment of the strings (i.e., substring matching). Therefore, the strings "Prof. John R. Smith, University of Calgary" and "John R. Smith, Prof." can match within a short distance using the Smith-Waterman distance, since the prefixes and suffixes are ignored. The distance between two strings can be computed using a dynamic programming technique, based on the Needleman and Wunsch algorithm[31]. The Smith and Waterman algorithm requires O(|σ1|·|σ2|) time and space for two strings of length |σ1| and |σ2|; many improvements have been proposed (e.g., the BLAST algorithm[35] in the context of computational biology applications, the algorithms by Baeza-Yates and Gonnet[36], and the agrep tool by Wu and Manber[37]). Pinheiro and Sun[38] proposed a similar similarity measure, which tries to find the best character alignment for the two compared strings σ1 and σ2, so that the number of character mismatches is minimized.
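The following is a minimal local-alignment score computation in the Smith-Waterman style; the match, mismatch, and gap weights are illustrative choices, not values prescribed by the original papers.

```python
def smith_waterman(s1: str, s2: str,
                   match: int = 2, mismatch: int = -1, gap: int = -1) -> int:
    """Best local alignment score; scoring weights here are illustrative."""
    rows, cols = len(s1) + 1, len(s2) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if s1[i - 1] == s2[j - 1] else mismatch)
            # Local alignment never goes below zero, which is what lets
            # low-cost prefixes and suffixes be ignored.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# The shared substring "John R. Smith" aligns well despite different prefixes/suffixes.
print(smith_waterman("Prof. John R. Smith, University of Calgary",
                     "John R. Smith, Prof."))
```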
Jaro distance metric:
Jaro[39] introduced a string comparison algorithm that was mainly used for comparison of last and first names. The basic algorithm for computing the Jaro metric for two strings σ1 and σ2 includes the following steps:
- Compute the string lengths |σ1| and |σ2|,
- Find the "common characters" in the two strings; common are all the characters σ1[i] and σ2[j] for which σ1[i] = σ2[j] and |i − j| ≤ ½·min{|σ1|, |σ2|}.
- Find the number of transpositions t; the number of transpositions is computed as follows: we compare the ith common character in σ1 with the ith common character in σ2. Each non-matching character is a transposition.
The Jaro comparison value is:

Jaro(σ1, σ2) = 1/3 · ( c/|σ1| + c/|σ2| + (c − t/2)/c ),

where c is the number of common characters and t is the number of transpositions.
From the description of the Jaro algorithm, we can see that it requires O(|σ1|·|σ2|) time for two strings of length |σ1| and |σ2|, mainly due to Step 2, which computes the "common characters" in the two strings. Winkler and Thibaudeau[40] modified the Jaro metric to give higher weight to prefix matches, since prefix matches are generally more important for surname matching.
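A small sketch of the Jaro computation, following the three steps above, is shown below. Note that widely used implementations define the matching window slightly differently (floor of half the longer length minus one); the constant here follows the description in the text.

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity following the step description above."""
    if not s1 or not s2:
        return 0.0
    window = min(len(s1), len(s2)) // 2          # "half the shorter string"
    used1 = [False] * len(s1)
    used2 = [False] * len(s2)
    # Step 2: greedily pair up equal characters whose positions are close.
    for i, ch in enumerate(s1):
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not used2[j] and s2[j] == ch:
                used1[i] = True
                used2[j] = True
                break
    common1 = [c for c, u in zip(s1, used1) if u]
    common2 = [c for c, u in zip(s2, used2) if u]
    c = len(common1)
    if c == 0:
        return 0.0
    # Step 3: each position where the ith common characters disagree counts
    # as a transposition; the formula then uses t/2.
    t = sum(a != b for a, b in zip(common1, common2))
    return (c / len(s1) + c / len(s2) + (c - t / 2) / c) / 3

print(round(jaro("MARTHA", "MARHTA"), 3))        # ~0.944
```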
q-grams:
The q-grams are short character substrings of length q of the database strings[41][42]. The intuition behind the use of q-grams as a foundation for approximate string matching is that when two strings σ1 and σ2 are similar, they share a large number of q-grams in common. Given a string σ, its q-grams are obtained by "sliding" a window of length q over the characters of σ. Since q-grams at the beginning and the end of the string can have fewer than q characters from σ, the strings are conceptually extended by "padding" the beginning and the end of the string with q − 1 occurrences of a special padding character, not in the original alphabet. With the appropriate use of hash-based indexes, the average time required for computing the q-gram overlap between two strings σ1 and σ2 is O(max{|σ1|, |σ2|}). Letter q-grams, including trigrams, bigrams, and/or unigrams, have been used in a variety of ways in text recognition and spelling correction[43]. One natural extension of q-grams are the positional q-grams[44], which also record the position of the q-gram in the string. Gravano et al.[45][46] showed how to use positional q-grams to locate efficiently similar strings within a relational database.
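A minimal sketch of padded q-gram extraction and overlap follows; the padding character, the value of q, and the Jaccard-style normalization are illustrative choices, since overlap can be normalized in several ways.

```python
def qgrams(s: str, q: int = 3, pad: str = "#") -> list[str]:
    """Padded q-grams obtained by sliding a window of length q over s."""
    padded = pad * (q - 1) + s + pad * (q - 1)
    return [padded[i:i + q] for i in range(len(padded) - q + 1)]

def qgram_overlap(s1: str, s2: str, q: int = 3) -> float:
    """Fraction of distinct q-grams shared by the two strings (Jaccard-style)."""
    g1, g2 = set(qgrams(s1, q)), set(qgrams(s2, q))
    return len(g1 & g2) / len(g1 | g2)

# A single typo changes only a few q-grams, so the overlap stays high.
print(round(qgram_overlap("Microsoft", "Microsft"), 3))
```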
Token-based similarity metrics
Character-based similarity metrics work well for typographical errors. However, it is often the case that typographical conventions lead to rearrangement of words (e.g., "John Smith" versus "Smith, John"). In such cases, character-level metrics fail to capture the similarity of the entities. Token-based metrics try to compensate for this problem.
Atomic strings:
Monge and Elkan[47] proposed a basic algorithm for matching text fields based on atomic strings. An atomic string is a sequence of alphanumeric characters delimited by punctuation characters. Two atomic strings match if they are equal or if one is a prefix of the other. Based on this algorithm, the similarity of two fields is the number of their matching atomic strings divided by their average number of atomic strings.
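The sketch below illustrates this idea; the bidirectional counting used here is a simple approximation of the description above (the original papers specify the pairing more precisely), and the tokenizer is an assumption.

```python
import re

def atomic_strings(field: str) -> list[str]:
    """Sequences of alphanumeric characters delimited by punctuation or whitespace."""
    return [t for t in re.split(r"[^0-9A-Za-z]+", field) if t]

def atomic_match(a: str, b: str) -> bool:
    """Two atomic strings match if they are equal or one is a prefix of the other."""
    return a == b or a.startswith(b) or b.startswith(a)

def atomic_similarity(f1: str, f2: str) -> float:
    t1, t2 = atomic_strings(f1), atomic_strings(f2)
    # Count matched atomic strings from both sides and average the two counts.
    matched = sum(1 for a in t1 if any(atomic_match(a, b) for b in t2))
    matched += sum(1 for b in t2 if any(atomic_match(a, b) for a in t1))
    # Matching atomic strings divided by the average number of atomic strings.
    return (matched / 2) / ((len(t1) + len(t2)) / 2)

print(round(atomic_similarity("Comput. Sci. Department",
                              "Department of Computer Science"), 3))
```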
WHIRL:
Cohen[48] described a system named WHIRL that adopts from information retrieval the cosine similarity combined with the tf.idf weighting scheme to compute the similarity of two fields. Cohen separates each string σ into words, and each word w is assigned a weight

v_σ(w) = log(tf_w + 1) · log(idf_w),

where tf_w is the number of times that w appears in the field and idf_w is |D|/n_w, where n_w is the number of records in the database D that contain the word w. The tf.idf weight for a word w in a field is high if w appears a large number of times in the field (large tf_w) and w is a sufficiently "rare" term in the database (large idf_w). For example, for a collection of company names, relatively infrequent terms such as "AT&T" or "IBM" will have higher idf weights than more frequent terms such as "Inc." The cosine similarity of σ1 and σ2 is defined as

sim(σ1, σ2) = Σ_j v_σ1(j) · v_σ2(j) / (||v_σ1||_2 · ||v_σ2||_2).
The cosine similarity metric works well for a large variety of entries and is insensitive to the location of words, thus allowing natural word moves and swaps (e.g., "John Smith" is equivalent to "Smith, John"). Also, the introduction of frequent words affects only minimally the similarity of the two strings, due to the low idf weight of the frequent words. For example, "John Smith" and "Mr. John Smith" would have similarity close to one. Unfortunately, this similarity metric does not capture word spelling errors, especially if they are pervasive and affect many of the words in the strings. For example, the strings "Compter Science Department" and "Deprtment of Computer Scence" will have zero similarity under this metric. Bilenko et al.[18] suggest the SoftTF.IDF metric to solve this problem. In the SoftTF.IDF metric, pairs of tokens that are "similar" (and not necessarily identical) are also considered in the computation of the cosine similarity. However, the product of the weights for non-identical token pairs is multiplied by the similarity of the token pair, which is less than one.
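The sketch below computes tf.idf-weighted, length-normalized vectors over a toy collection of records and compares them with the cosine; the tokenizer and the toy records are assumptions, and the weighting follows the log(tf + 1) · log(|D|/n_w) form given above.

```python
import math
import re
from collections import Counter

def tokenize(field: str) -> list[str]:
    return re.findall(r"[a-z0-9&]+", field.lower())

def tfidf_vectors(records: list[str]) -> list[dict[str, float]]:
    """One tf.idf-weighted, length-normalized vector per record."""
    docs = [Counter(tokenize(r)) for r in records]
    n_docs = len(docs)
    df = Counter(w for d in docs for w in d)      # n_w: records containing word w
    vectors = []
    for d in docs:
        v = {w: math.log(tf + 1) * math.log(n_docs / df[w]) for w, tf in d.items()}
        norm = math.sqrt(sum(x * x for x in v.values())) or 1.0
        vectors.append({w: x / norm for w, x in v.items()})
    return vectors

def cosine(v1: dict[str, float], v2: dict[str, float]) -> float:
    return sum(weight * v2.get(w, 0.0) for w, weight in v1.items())

records = ["AT&T Corporation", "AT&T Corp", "IBM Corporation",
           "John Smith", "Smith, John"]
vecs = tfidf_vectors(records)
print(round(cosine(vecs[3], vecs[4]), 3))   # 1.0: word order does not matter
```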
q-grams with tf.idf:
Gravano et al.[49] extended the WHIRL system to handle spelling errors by using q-grams, instead of words, as tokens. In this setting, a spelling error minimally affects the set of common q-grams of two strings, so the two strings "Gteway Communications" and "Comunications Gateway" have high similarity under this metric, despite the block move and the spelling errors in both words. This metric also handles the insertion and deletion of words nicely. The string "Gateway Communications" matches with high similarity the string "Communications Gateway International", since the q-grams of the word "International" appear often in the relation and have low weight.
Phonetic similarity metrics
Character-level and token-based similarity metrics focus on the string-based representation of the database records. However, strings may be phonetically similar even if they are not similar at a character or token level. For example, the word Kageonne is phonetically similar to Cajun despite the fact that the string representations are very different. Phonetic similarity metrics try to address such issues and match such strings.
Soundex:
Soundex, invented by Russell[50][51], is the most common phonetic coding scheme. Soundex is based on the assignment of identical code digits to phonetically similar groups of consonants and is used mainly to match surnames. The rules of Soundex coding are as follows (a small code sketch follows the list):
- Keep the first letter of the surname as the prefix letter and ignore completely all occurrences of W and H in other positions;
- Assign the following codes to the remaining letters: B, F, P, V → 1; C, G, J, K, Q, S, X, Z → 2; D, T → 3; L → 4; M, N → 5; R → 6;
- A, E, I, O, U and Y are not coded but serve as separators (see below);
- Consolidate sequences of identical codes by keeping only the first occurrence of the code;
- Drop the separators;
- Keep the prefix letter and the first three codes, padding with zeros if there are fewer than three codes.
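A compact implementation of the rules listed above might look as follows (a minimal sketch; production name-matching libraries handle additional edge cases).

```python
SOUNDEX_CODES = {
    **dict.fromkeys("BFPV", "1"),
    **dict.fromkeys("CGJKQSXZ", "2"),
    **dict.fromkeys("DT", "3"),
    "L": "4",
    **dict.fromkeys("MN", "5"),
    "R": "6",
}

def soundex(surname: str) -> str:
    """Soundex code following the rules listed above."""
    name = "".join(ch for ch in surname.upper() if ch.isalpha())
    if not name:
        return ""
    prefix, rest = name[0], name[1:]
    codes = []
    prev = SOUNDEX_CODES.get(prefix, "")   # code of the prefix letter, if any
    for ch in rest:
        if ch in "WH":                     # W and H are ignored entirely
            continue
        if ch in "AEIOUY":                 # vowels act as separators: they break runs
            prev = ""
            continue
        code = SOUNDEX_CODES[ch]
        if code != prev:                   # consolidate identical adjacent codes
            codes.append(code)
        prev = code
    return prefix + ("".join(codes) + "000")[:3]

print(soundex("Robert"), soundex("Rupert"))     # R163 R163
print(soundex("Ashcraft"), soundex("Tymczak"))  # A261 T522
```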
Newcombe[10] reports that the Soundex code remains unchanged across roughly two-thirds of the spelling variations observed in linked pairs of vital records, and that it sacrifices only a small part of the total discriminating power of the full alphabetic surname. The code is designed primarily for Caucasian surnames, but works well for names of many different origins (such as those appearing on the records of the U.S. Immigration and Naturalization Service). However, when the names are of predominantly East Asian origin, this code is less satisfactory, because much of the discriminating power of these names resides in the vowel sounds, which the code ignores.
New York State Identification and Intelligence System (NYSIIS):
The NYSIIS system, proposed by Taft[52], differs from Soundex in that it retains information about the position of vowels in the encoded word by converting most vowels to the letter A. Furthermore, NYSIIS does not use numbers to replace letters; instead, it replaces consonants with other, phonetically similar letters, thus returning a purely alphabetic code (no numeric component). Usually, the NYSIIS code for a surname is based on a maximum of nine letters of the full alphabetical name, and the NYSIIS code itself is then limited to six characters. Taft[52] compared Soundex with NYSIIS, using a name database of New York State, and concluded that NYSIIS is 98.72% accurate, while Soundex is 95.99% accurate for locating surnames. The NYSIIS encoding system is still used today by the New York State Division of Criminal Justice Services.
Oxford Name Compression Algorithm (ONCA):
ONCA[53] is a two-stage technique designed to overcome most of the unsatisfactory features of pure Soundex coding while retaining the convenient four-character, fixed-length format. In the first step, ONCA uses a British version of the NYSIIS method of compression. Then, in the second step, the transformed and partially compressed name is Soundex-ed in the usual way. This two-stage technique has been used successfully for grouping similar names together.
Metaphone and Double Metaphone:
Philips[54] suggested the Metaphone algorithm as a better alternative to Soundex. Philips suggested using 16 consonant sounds that can describe a large number of sounds used in many English and non-English words. Double Metaphone[55] is a better version of Metaphone, improving some encoding choices made in the initial Metaphone and allowing multiple encodings for names that have various possible pronunciations. For such cases, all possible encodings are tested when trying to retrieve similar names. The introduction of multiple phonetic encodings greatly enhances the matching performance, with rather small overhead. Philips suggested that, at most, 10% of American surnames have multiple encodings.
Numeric Similarity Metrics
While multiple methods exist for detecting similarities of string-based data, the methods for capturing similarities in numeric data are rather primitive. Typically, the numbers are treated as strings (and compared using the metrics described above) or compared using simple range queries, which locate numbers with similar values. Koudas et al.[56] suggest, as a direction for future research, consideration of the distribution and type of the numeric data, or extending the notion of cosine similarity for numeric data[57] to work well for duplicate detection purposes.
Concluding Remarks
The large number of field comparison metrics reflects the large number of errors and transformations that may occur in real-life data. Unfortunately, there are very few studies that compare the effectiveness of the various distance metrics presented here. Yancey[58] shows that the Jaro-Winkler metric works well for name-matching tasks on data from the U.S. census. A notable comparison effort is the work of Bilenko et al.[18], who compare the effectiveness of character-based and token-based similarity metrics. They show that, among the character-based distance metrics, the Monge-Elkan metric has the highest average performance across data sets. They also show that the SoftTF.IDF metric works better than any other metric. However, Bilenko et al. emphasize that no single metric is suitable for all data sets: even metrics that demonstrate robust and high performance for some data sets can perform poorly on others. Hence, they advocate more flexible metrics that can accommodate multiple similarity comparisons (e.g.,[59][18]). In the next section, we review such approaches.
Detecting Duplicate Records
In the previous section we described methods that can be used to match individual fields of a record. In most real-life situations, however, the records consist of multiple fields, making the duplicate detection problem much more complicated. In this section, we review methods that are used for matching records with multiple fields. The presented methods can be broadly divided into two categories:
- Approaches that rely on training data to "learn" how to match the records. This category includes (some) probabilistic approaches and supervised machine learning techniques.
- Approaches that rely on domain knowledge or on generic distance metrics to match records. This category includes approaches that use declarative languages for matching, and approaches that devise distance metrics appropriate for the duplicate detection task.
The rest of this section is organized as follows: initially, in Notation, we describe the notation. In Probabilistic matching models, we present probabilistic approaches for solving the duplicate detection problem. In Supervised learning, we list approaches that use supervised machine learning techniques, and in Active learning, we describe variations based on active learning methods. Distance-based describes distance-based methods and Rule-based describes declarative techniques for duplicate detection. Finally, Unsupervised learning covers unsupervised machine learning techniques, and Concluding remarks provides some concluding remarks.
Notation
We use A and B to denote the tables that we want to match, and we assume, without loss of generality, that A and B have comparable fields. In the duplicate detection problem, each tuple pair ⟨α, β⟩, α ∈ A, β ∈ B, is assigned to one of two classes, M and U. The class M contains the record pairs that represent the same entity ("match") and the class U contains the record pairs that represent two different entities ("non-match").
We represent each tuple pair ⟨α, β⟩ as a random vector x = [x1, x2, ..., xn] with n components that correspond to the n comparable fields of A and B. Each xi shows the level of agreement of the ith field for the records α and β. Many approaches use binary values for the xi's and set xi = 1 if field i agrees and xi = 0 if field i disagrees.
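As a concrete illustration, a comparison vector can be built by applying a field-level similarity function and a cutoff to each comparable field. The sketch below uses the standard library's SequenceMatcher as a stand-in for any of the field metrics from the previous section; the field names and the 0.9 threshold are illustrative.

```python
from difflib import SequenceMatcher

def agree(v1: str, v2: str, threshold: float = 0.9) -> int:
    """Binary agreement indicator x_i for one field; any field metric could be used."""
    return int(SequenceMatcher(None, v1.lower(), v2.lower()).ratio() >= threshold)

def comparison_vector(rec_a: dict, rec_b: dict, fields: list[str]) -> list[int]:
    """x = [x_1, ..., x_n], one agreement indicator per comparable field."""
    return [agree(rec_a[f], rec_b[f]) for f in fields]

a = {"name": "John R. Smith", "city": "New York", "phone": "212-555-0100"}
b = {"name": "Jon R. Smith",  "city": "New York", "phone": "212-555-0199"}
print(comparison_vector(a, b, ["name", "city", "phone"]))   # [1, 1, 0]
```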
Probabilistic Matching Models
Newcombe et al.[8] were the first to recognize duplicate detection as a Bayesian inference problem. Then, Fellegi and Sunter[12] formalized the intuition of Newcombe et al. and introduced the notation that we use, which is also commonly used in the duplicate detection literature. The comparison vector x is the input to a decision rule that assigns ⟨α, β⟩ to M or to U. The main assumption is that x is a random vector whose density function is different for each of the two classes. Then, if the density function for each class is known, the duplicate detection problem becomes a Bayesian inference problem. In the following sections, we discuss various techniques that have been developed for addressing this (general) decision problem.
The Bayes Decision Rule for Minimum Error
Let x be a comparison vector, randomly drawn from the comparison space that corresponds to the record pair ⟨α, β⟩. The goal is to determine whether ⟨α, β⟩ ∈ M or ⟨α, β⟩ ∈ U. A decision rule based simply on probabilities can be written as follows:

⟨α, β⟩ ∈ M if p(M|x) ≥ p(U|x); otherwise, ⟨α, β⟩ ∈ U.
This decision rule indicates that if the probability of the match class M, given the comparison vector x, is larger than the probability of the non-match class U, then x is classified to M, and vice versa. By using Bayes' theorem, the previous decision rule may be expressed as:

⟨α, β⟩ ∈ M if l(x) = p(x|M) / p(x|U) ≥ p(U) / p(M); otherwise, ⟨α, β⟩ ∈ U.

The ratio

l(x) = p(x|M) / p(x|U)

is called the likelihood ratio. The ratio p(U)/p(M) denotes the threshold value of the likelihood ratio for the decision. We refer to this version of the decision rule as the Bayes test for minimum error. It can be easily shown[60] that the Bayes test results in the smallest probability of error, and it is in that respect an optimal classifier. Of course, this holds only when the distributions p(x|M) and p(x|U) and the priors p(M) and p(U) are known; this, unfortunately, is very rarely the case.
One common approach to computing the distributions p(x|M) and p(x|U), usually called naive Bayes, is to make a conditional independence assumption and postulate that the probabilities p(xi|M) and p(xj|M) are independent if i ≠ j (similarly for p(xi|U) and p(xj|U)). In that case, we have

p(x|M) = ∏i p(xi|M) and p(x|U) = ∏i p(xi|U).
The values of p(xi|M) and p(xi|U) can be computed using a training set of pre-labeled record pairs. However, the probabilistic model can also be used without training data. Jaro[61] used a binary model for the values of xi (i.e., xi = 1 if field i "matches" and xi = 0 otherwise) and suggested using an expectation maximization (EM) algorithm[62] to compute the probabilities p(xi|M). The probabilities p(xi|U) can be estimated by taking random pairs of records (which are with high probability in U).
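The sketch below shows the mechanics of the naive Bayes likelihood ratio for binary comparison vectors. The per-field probabilities and the prior-based threshold are made-up numbers standing in for estimates obtained from labeled pairs or from EM.

```python
import math

# Hypothetical per-field probabilities, normally estimated from training data or EM:
# m[i] = p(x_i = 1 | M)  (field i agrees given a true match)
# u[i] = p(x_i = 1 | U)  (field i agrees given a non-match)
m = [0.95, 0.90, 0.80]
u = [0.10, 0.30, 0.05]

def log_likelihood_ratio(x: list[int]) -> float:
    """log( p(x|M) / p(x|U) ) under the conditional independence assumption."""
    total = 0.0
    for xi, mi, ui in zip(x, m, u):
        pm = mi if xi == 1 else 1 - mi
        pu = ui if xi == 1 else 1 - ui
        total += math.log(pm / pu)
    return total

x = [1, 1, 0]                      # comparison vector for a candidate pair
threshold = math.log(1000)         # e.g., log(p(U)/p(M)) if roughly 1 pair in 1000 matches
print(round(log_likelihood_ratio(x), 3), log_likelihood_ratio(x) >= threshold)
```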
When conditional independence is not a reasonable assumption, Winkler[63] suggested using the general expectation maximization algorithm to estimate p(x|M) and p(x|U). In[64], Winkler claims that the general, unsupervised EM algorithm works well under five conditions:
- the data contain a relatively large percentage of matches (more than 5%),
- the matching pairs are "well-separated" from the other classes,
- the rate of typographical errors is low,
- there are sufficiently many redundant identifiers to overcome errors in other fields of the record, and
- the estimates computed under the conditional independence assumption result in good classification performance.
Winkler[64] shows how to relax the assumptions above (including the conditional independence assumption) and still get good matching results. Winkler shows that a semi-supervised model, which combines labeled and unlabeled data (similar to Nigam et al.[65]), performs better than purely unsupervised approaches. When no training data is available, unsupervised EM works well, even when a limited number of interactions is allowed between the variables. Interestingly, the results under the independence assumption are not considerably worse compared to the case in which the EM model allows variable interactions.
Du Bois[66] pointed out that fields often have missing (null) values and proposed a different method to correct mismatches that occur due to missing values. Du Bois suggested replacing the n-dimensional comparison vector x with a comparison vector of dimension 2n that records, for each field, both the agreement value xi and an indicator of whether the ith field is actually present in both records. Using this representation, mismatches that occur due to missing data are typically discounted, resulting in improved duplicate detection performance. Du Bois proposed using an independence model to learn the distributions of the new comparison vectors for M and for U from a set of pre-labeled training record pairs.
The Bayes Decision Rule for Minimum Cost
Often, in practice, the minimization of the probability of error is not the best criterion for creating decision rules, as misclassifications of M and U samples may have different consequences. Therefore, it is appropriate to assign a cost c_ij to each situation, namely the cost of deciding that x belongs to class i when x actually belongs to class j. Then, the expected costs r_M(x) and r_U(x) of deciding that x belongs to class M and class U, respectively, are:

r_M(x) = c_MM · p(M|x) + c_MU · p(U|x)
r_U(x) = c_UM · p(M|x) + c_UU · p(U|x)
In that case, the decision rule for assigning x to M becomes:

x is assigned to M if r_M(x) ≤ r_U(x); otherwise, x is assigned to U.
It can be easily proved[67] that the minimum cost decision rule for this problem can be stated as:

x is assigned to M if l(x) = p(x|M) / p(x|U) ≥ ((c_MU − c_UU) · p(U)) / ((c_UM − c_MM) · p(M)); otherwise, x is assigned to U.
Comparing the minimum error and minimum cost decision rules, we notice that the two decision rules become the same for the special setting of the cost functions in which c_UM − c_MM = c_MU − c_UU. In this case, the cost functions are termed symmetrical. For a symmetrical cost function, the cost becomes the probability of error, and the Bayes test for minimum cost specifically addresses and minimizes this error.
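The sketch below computes the cost-based threshold derived above and shows how asymmetric costs can change a decision; the cost values, priors, and likelihood ratio are illustrative numbers.

```python
def classify_min_cost(lx: float, p_m: float, p_u: float,
                      c_mu: float, c_um: float,
                      c_mm: float = 0.0, c_uu: float = 0.0) -> str:
    """Compare the likelihood ratio l(x) against the cost-based threshold.
    c_ij is the cost of deciding class i when the true class is j."""
    threshold = ((c_mu - c_uu) * p_u) / ((c_um - c_mm) * p_m)
    return "M" if lx >= threshold else "U"

# Making a missed match (deciding U for a true match) 5x as costly as a false
# match lowers the threshold, so the same pair can flip from U to M.
print(classify_min_cost(lx=250.0, p_m=0.001, p_u=0.999, c_mu=1.0, c_um=5.0))  # M
print(classify_min_cost(lx=250.0, p_m=0.001, p_u=0.999, c_mu=1.0, c_um=1.0))  # U
```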
Decision with a Reject Region
Using the Bayes decision rule when the distribution parameters are known leads to optimal results. However, even in this ideal scenario, when the likelihood ratio is close to the threshold, the error (or cost) of any decision is high[67]. Based on this well-known and general idea in decision theory, Fellegi and Sunter[12] suggested adding an extra "reject" class in addition to the classes M and U. The reject class contains record pairs for which it is not possible to make any definite inference, and a "clerical review" is necessary. These pairs are examined manually by experts to decide whether they are true matches or not. By setting thresholds on the conditional error probabilities, we can define the reject region and the reject probability, which measures the probability of directing a record pair to an expert for review.
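A minimal sketch of the resulting three-way decision follows; the two thresholds are illustrative, whereas in practice they are derived from the tolerated false-match and false-non-match error rates.

```python
def three_way_decision(lx: float, upper: float, lower: float) -> str:
    """Fellegi-Sunter-style decision with a reject region between two thresholds."""
    if lx >= upper:
        return "match"
    if lx <= lower:
        return "non-match"
    return "possible match (clerical review)"

# Illustrative thresholds: pairs in between are routed to a human reviewer.
for lx in (5000.0, 50.0, 0.01):
    print(lx, "->", three_way_decision(lx, upper=1000.0, lower=0.1))
```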
Tepping[11] was the first to suggest a solution methodology focusing on the costs of the decision. He presented a graphical approach for estimating the likelihood thresholds. Verykios et al.[68] developed a formal framework for the cost-based approach taken by Tepping, which shows how to compute the thresholds for the three decision areas when the costs and the priors p(M) and p(U) are known.
The "reject region" approach can be easily extended to a larger number of decision areas[69]. The main problem with such a generalization is appropriately ordering the thresholds which determine the regions in a way that no region disappears.
Supervised and Semi-Supervised Learning
The probabilistic model uses a Bayesian approach to classify record pairs into the two classes, M and U. This model was widely used for duplicate detection tasks, usually as an application of the Fellegi-Sunter model. While the Fellegi-Sunter approach dominated the field for more than two decades, the development of new classification techniques in the machine learning and statistics communities prompted the development of new deduplication techniques. Supervised learning systems rely on the existence of training data in the form of record pairs, pre-labeled as matching or not.
One set of supervised learning techniques treats each record pair independently, similarly to the probabilistic techniques of Probabilistic matching models. Cochinwala et al.[70] used the well-known CART algorithm[71], which generates classification and regression trees; a linear discriminant algorithm[60], which generates a linear combination of the parameters for separating the data according to their classes; and a "vector quantization" approach, which is a generalization of nearest neighbor algorithms. The experiments they conducted indicate that CART has the smallest error percentage. Bilenko et al.[18] use SVMlight[72] to learn how to merge the matching results for the individual fields of the records. Bilenko et al. showed that the SVM approach usually outperforms simpler approaches, such as treating the whole record as one large field. A typical post-processing step for these techniques (including the probabilistic techniques of Probabilistic matching models) is to construct a graph for all the records in the database, linking together the matching records. Then, using the transitivity assumption, all the records that belong to the same connected component are considered identical[73].
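The sketch below trains a classifier on labeled comparison vectors and applies it to new candidate pairs; scikit-learn's SVC is used here purely as a stand-in for the SVMlight package cited in the text, and the toy training data are fabricated for illustration.

```python
from sklearn.svm import SVC

# Toy training data: per-field similarity vectors labeled 1 (match) or 0 (non-match).
X_train = [
    [0.95, 1.00, 0.90],
    [0.88, 0.92, 1.00],
    [0.92, 0.85, 0.70],
    [0.10, 0.50, 0.20],
    [0.30, 0.00, 0.15],
    [0.05, 0.40, 0.60],
]
y_train = [1, 1, 1, 0, 0, 0]

clf = SVC(kernel="rbf").fit(X_train, y_train)

# Classify new candidate pairs from their comparison vectors.
X_new = [[0.91, 0.95, 0.88], [0.20, 0.35, 0.10]]
print(clf.predict(X_new))             # predicted classes, e.g. [1 0]
print(clf.decision_function(X_new))   # signed margins, usable for a reject region
```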
The transitivity assumption can sometimes result in inconsistent decisions. For example, ⟨a, b⟩ and ⟨b, c⟩ can be considered matches, but ⟨a, c⟩ not. Partitioning such "inconsistent" graphs with the goal of minimizing inconsistencies is an NP-complete problem[74]. Bansal et al.[74] propose a polynomial-time approximation algorithm that can partition such a graph, identifying automatically the clusters and the number of clusters in the dataset. Cohen and Richman[75] proposed a supervised approach in which the system learns from training data how to cluster together records that refer to the same real-world entity. The main contribution of this approach is the adaptive distance function, which is learned from a given set of training examples. McCallum and Wellner[76] learn the clustering method using training data; their technique is equivalent to a graph partitioning technique that tries to find the min-cut and the appropriate number of clusters for the given data set, similarly to the work of Bansal et al.[74].
The supervised clustering techniques described above use records as the nodes of the graph. Singla and Domingos[77] observed that by using attribute values as nodes, it is possible to propagate information across nodes and improve duplicate record detection. For example, if two records containing the company names "Google" and "Google Inc." are deemed equal, then the attribute values "Google" and "Google Inc." are also deemed equal, and this information can be useful for other record comparisons. The underlying assumption is that the only differences are due to different representations of the same entity and that there is no erroneous information in the attribute values (e.g., a data entry error recording the wrong city as the location of the Google headquarters). Pasula et al.[78] propose a semi-supervised probabilistic relational model that can handle a generic set of transformations. While the model can handle a large number of duplicate detection problems, the use of exact inference results in a computationally intractable model. Pasula et al. propose using a Markov Chain Monte Carlo (MCMC) sampling algorithm to avoid the intractability issue. However, it is unclear whether techniques that rely on graph-based probabilistic inference can scale well for data sets with hundreds of thousands of records.
Active-Learning-Based Techniques
One of the problems with the supervised learning techniques is the requirement for a large number of training examples. While it is easy to create a large number of training pairs that are either clearly non-duplicates or clearly duplicates, it is very difficult to generate ambiguous cases that would help create a highly accurate classifier. Based on this observation, some duplicate detection systems used active learning techniques[79] to automatically locate such ambiguous pairs. Unlike an "ordinary" learner that is trained using a static training set, an "active" learner actively picks subsets of instances from unlabeled data, which, when labeled, will provide the highest information gain to the learner.
Sarawagi and Bhamidipaty[15] designed ALIAS, a learning-based duplicate detection system that uses the idea of a "reject region" (see Decision with a reject region) to significantly reduce the size of the training set. The main idea behind ALIAS is that most duplicate and non-duplicate pairs are clearly distinct. For such pairs, the system can automatically categorize them in U and M without the need for manual labeling. ALIAS requires humans to label pairs only for cases where the uncertainty is high. This is similar to the "reject region" in the Fellegi and Sunter model, which marked ambiguous cases as cases for clerical review.
ALIAS starts with a small set of record pairs designated for training, which have been characterized as either matched or unique. This initial set of labeled data forms the training data for a preliminary classifier. Subsequently, the initial classifier is used to predict the status of unlabeled pairs of records. The initial classifier will make clear determinations on some unlabeled instances but will be uncertain about most. The goal is to seek out from the unlabeled data pool those instances which, when labeled, will improve the accuracy of the classifier at the fastest possible rate. Pairs whose status is difficult to determine serve to strengthen the learner. Conversely, instances whose status the learner can easily predict do not have much effect on the learner. Using this technique, ALIAS can quickly learn the peculiarities of a data set and rapidly detect duplicates using only a small amount of training data.
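A minimal uncertainty-sampling loop in the spirit of this idea is sketched below (it is not the actual ALIAS system). Logistic regression stands in for the learner, the data are synthetic, and the human labeler is simulated by a simple rule on the field similarities.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A handful of labeled comparison vectors seed the learner.
X_labeled = np.array([[0.95, 0.90], [0.90, 1.00], [0.10, 0.20], [0.20, 0.10]])
y_labeled = np.array([1, 1, 0, 0])

# A large pool of unlabeled comparison vectors.
X_pool = rng.random((1000, 2))

for round_no in range(3):
    clf = LogisticRegression().fit(X_labeled, y_labeled)
    proba = clf.predict_proba(X_pool)[:, 1]
    # Pick the pairs the classifier is least certain about (probability near 0.5).
    idx = np.argsort(np.abs(proba - 0.5))[:5]
    # In a real system these pairs would be shown to a human reviewer;
    # here the "oracle" is simulated by thresholding the mean field similarity.
    new_labels = (X_pool[idx].mean(axis=1) > 0.5).astype(int)
    X_labeled = np.vstack([X_labeled, X_pool[idx]])
    y_labeled = np.concatenate([y_labeled, new_labels])
    X_pool = np.delete(X_pool, idx, axis=0)
    print(f"round {round_no}: labeled examples = {len(y_labeled)}")
```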
Tejada et al.[80][59] used a similar strategy and employed decision trees to learn rules for matching records with multiple fields. Their method suggested that by creating multiple classifiers, trained using slightly different data or parameters, it is possible to detect ambiguous cases and then ask the user for feedback. The key innovation in this work is the creation of several redundant functions and the concurrent exploitation of their conflicting actions in order to discover new kinds of inconsistencies among duplicates in the data set.
Distance-Based Techniques
Even active learning techniques require some training data or some human effort to create the matching models. In the absence of such training data or the ability to obtain human input, supervised and active learning techniques are not appropriate. One way of avoiding the need for training data is to define a distance metric for records which does not need tuning through training data. Using the distance metric and an appropriate matching threshold, it is possible to match similar records without the need for training.
One approach is to treat a record as one long field and use one of the distance metrics described in Field matching to determine which records are similar. Monge and Elkan[47][73] proposed a string matching algorithm for detecting highly similar database records. The basic idea was to apply a general-purpose field matching algorithm, especially one that is able to account for gaps in the strings, to play the role of the duplicate detection algorithm. Similarly, Cohen[81] suggested using the tf.idf weighting scheme (see Token-based similarity metrics), together with the cosine similarity metric, to measure the similarity of records. Koudas et al.[56] presented some practical solutions to problems encountered during the deployment of such a string-based duplicate detection system at AT&T.
Distance-based approaches that conflate each record into one big field may ignore important information that can be used for duplicate detection. A simple approach is to measure the distance between individual fields, using the appropriate distance metric for each field, and then compute the weighted distance[82] between the records. In this case, the problem is the computation of the weights, and the overall setting becomes very similar to the probabilistic setting that we discussed in Probabilistic matching models. An alternative approach, proposed by Guha et al.[83], is to create a distance metric that is based on ranked list merging. The basic idea is that if we compare only one field from the record, the matching algorithm can easily find the best matches and rank them according to their similarity, putting the best matches first. By applying the same principle for all n fields, we can get, for each record, n ranked lists of records, one for each field. Then, the goal is to create a ranking of records that has the minimum aggregate rank distance when compared to all the lists. Guha et al. map the problem into the minimum cost perfect matching problem and then develop efficient solutions for identifying the top-k matching records. The first solution is based on the Hungarian Algorithm[84], a graph-theoretic algorithm that solves the minimum cost perfect matching problem. Guha et al. also present the Successive Shortest Paths algorithm, which works well for smaller values of k and is based on the idea that it is not required to examine all potential matches to identify the top-k matches. Both of the proposed algorithms are implemented in T-SQL and are directly deployable over existing relational databases.
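The weighted per-field combination is illustrated below; the field weights are assumptions (hand-tuned or learned), which is exactly the difficulty noted above, and SequenceMatcher again stands in for any field metric from Field matching.

```python
from difflib import SequenceMatcher

def field_distance(v1: str, v2: str) -> float:
    """A generic per-field string distance in [0, 1]."""
    return 1.0 - SequenceMatcher(None, v1.lower(), v2.lower()).ratio()

def record_distance(rec_a: dict, rec_b: dict, weights: dict) -> float:
    """Weighted combination of per-field distances (weights are illustrative)."""
    total = sum(weights.values())
    return sum(w * field_distance(rec_a[f], rec_b[f]) for f, w in weights.items()) / total

a = {"name": "Gateway Communications", "city": "San Diego"}
b = {"name": "Gteway Comunications",  "city": "San Diego, CA"}
print(round(record_distance(a, b, {"name": 0.7, "city": 0.3}), 3))
```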
The distance-based techniques described so far treat each record as a flat entity, ignoring the fact that data is often stored in relational databases, in multiple tables. Ananthakrishna et al.[85] describe a similarity metric that uses not only the textual similarity, but also the "co-occurrence" similarity of two entries in a database. For example, the entries in the state column "CA" and "California" have small textual similarity; however, the city entries "San Francisco," "Los Angeles," "San Diego," and so on, often have foreign keys that point both to "CA" and "California." Therefore, it is possible to infer that "CA" and "California" are equivalent. Ananthakrishna et al. show that by using "foreign key co-occurrence" information, they can substantially improve the quality of duplicate detection in databases that use multiple tables to store the entries of a record. This approach is conceptually similar to the work of Perkowitz et al.[26] and of Dasu et al.[27], which examine the contents of fields to locate the matching fields across two tables (see Data preparation).
Finally, one of the problems of the distance-based techniques is the need to define the appropriate value for the matching threshold. In the presence of training data, it is possible to find the appropriate threshold value. However, this would nullify the major advantage of distance-based techniques, which is the ability to operate without training data. Recently, Chaudhuri et al.[86] proposed a new framework for distance-based duplicate detection, observing that the distance threshold for detecting real duplicate entries is different for each database tuple. To detect the appropriate threshold, Chaudhuri et al. observed that entries that correspond to the same real-world object but have different representations in the database tend to (1) have small distances from each other (compact set property), and (2) have only a small number of other neighbors within a small distance (sparse neighborhood property). Furthermore, Chaudhuri et al. propose an efficient algorithm for computing the required threshold for each object in the database, and show that the quality of the results outperforms approaches that rely on a single, global threshold.
Rule-based Approaches
A special case of distance-based approaches is the use of rules to define whether two records are the same or not. Rule-based approaches can be considered distance-based techniques where the distance of two records is either 0 or 1. Wang and Madnick[16] proposed a rule-based approach for the duplicate detection problem. For cases in which there is no global key, Wang and Madnick suggest the use of rules developed by experts to derive a set of attributes that collectively serve as a "key" for each record. For example, an expert might define rules that infer identifying attributes from the values of other fields of the record.
By using such rules, Wang and Madnick hoped to generate unique keys that can cluster multiple records that represent the same real-world entity. Lim et al.[87] also used a rule-based approach, but with the extra restriction that the result of the rules must always be correct. Therefore, the rules should not be heuristically-defined but should reflect absolute truths and serve as functional dependencies.
Hernández and Stolfo[14] further developed this idea and derived an equational theory that dictates the logic of domain equivalence. This equational theory specifies an inference about the similarity of the records. For example, if two persons have similar name spellings, and these persons have the same address, we may infer that they are the same person. Specifying such an inference in the equational theory requires a declarative rule language. For example, the following is a rule that exemplifies one axiom of the equational theory developed for an employee database:

Given two records r1 and r2 in the employee table:
IF the name of r1 is similar to the name of r2
AND the address of r1 equals the address of r2
THEN r1 matches r2.
Note that "similar to" is measured by one of the string comparison techniques (field matching), and "matches" means to declare that those two records are matched and therefore represent the same person.
AJAX[88] is a prototype system that provides a declarative language for specifying data cleaning programs, consisting of SQL statements enhanced with a set of primitive operations to express various cleaning transformations. AJAX provides a framework wherein the logic of a data cleaning program is modeled as a directed graph of data transformations starting from some input source data. Four types of data transformations are provided to the user of the system. The mapping transformation standardizes data, the matching transformation finds pairs of records that probably refer to the same real object, the clustering transformation groups together matching pairs with a high similarity value, and finally, the merging transformation collapses each individual cluster into a tuple of the resulting data source.
It is noteworthy that such rule-based approaches, which require a human expert to devise meticulously crafted matching rules, typically result in systems with high accuracy. However, the tuning of these rules requires extremely high manual effort from the human experts, and this effort makes the deployment of such systems difficult in practice. Currently, the typical approach is to use a system that generates matching rules from training data (see Supervised and semi-supervised learning and Active-learning-based techniques) and then manually tune the automatically generated rules.
Unsupervised Learning
As we mentioned earlier, the comparison space consists of comparison vectors which contain information about the differences between fields in a pair of records. Unless some information exists about which comparison vectors correspond to which category (match, non-match, or possible-match), the labeling of the comparison vectors in the training data set should be done manually. One way to avoid manual labeling of the comparison vectors is to use clustering algorithms, and group together similar comparison vectors. The idea behind most unsupervised learning approaches for duplicate detection is that similar comparison vectors correspond to the same class.
The idea of unsupervised learning for duplicate detection has its roots in the probabilistic model proposed by Fellegi and Sunter (see probabilistic matching models). As we discussed in probabilistic matching models, when there are no training data to compute the probability estimates, it is possible to use variations of the Expectation Maximization algorithm to identify appropriate clusters in the data.
Verykios et al.[89] propose the use of a bootstrapping technique based on clustering to learn matching models. The basic idea, also known as co-training[90], is to use very few labeled data, and then use unsupervised learning techniques to label appropriately the data with unknown labels. Initially, Verykios et al. treat each entry of the comparison vector (which corresponds to the result of a field comparison) as a continuous, real variable. Then, they partition the comparison space into clusters by using the AutoClass[91] clustering tool. The basic premise is that each cluster contains comparison vectors with similar characteristics. Therefore all the record pairs in the cluster belong to the same class (matches, non-matches, or possible-matches). Thus, by knowing the real class of only a few vectors in each cluster, it is possible to infer the class of all vectors in the cluster, and therefore mark the corresponding record pairs as matches or not. Elfeky et al.[92] implemented this idea in TAILOR, a toolbox for detecting duplicate entries in data sets. Verykios et al.\ show that the classifiers generated using the new, larger training set have high accuracy, and require only a minimal number of pre-labeled record pairs.
Ravikumar and Cohen[93] follow a similar approach and propose a hierarchical, graphical model for learning to match record pairs. The foundation of this approach is to model each field of the comparison vector as a latent binary variable which shows whether the two fields match or not. The latent variable then defines two probability distributions for the values of the corresponding "observed" comparison variable. Ravikumar and Cohen show that it is easier to learn the parameters of a hierarchical model than to attempt to directly model the distributions of the real-valued comparison vectors. Bhattacharya and Getoor[94] propose to use the Latent Dirichlet Allocation generative model to perform duplicate detection. In this model, the latent variable is a unique identifier for each entity in the database.
Concluding Remarks
There are multiple techniques for duplicate record detection. We can divide the techniques into two broad categories: ad-hoc techniques that work quickly on existing relational databases, and more "principled" techniques that are based on probabilistic inference models. While probabilistic methods outperform ad-hoc techniques in terms of accuracy, the ad-hoc techniques work much faster and can scale to databases with hundreds of thousands of records. Probabilistic inference techniques are practical today only for data sets that are one or two orders of magnitude smaller than the data sets handled by ad-hoc techniques. A promising direction for future research is to devise techniques that can substantially improve the efficiency of approaches that rely on machine learning and probabilistic inference.
A question that is unlikely to be resolved soon is the question of which of the presented methods should be used for a given duplicate detection task. Unfortunately, there is no clear answer to this question. The duplicate record detection task is highly data-dependent and it is unclear if we will ever see a technique dominating all others across all data sets. The problem of choosing the best method for duplicate data detection is very similar to the problem of model selection and performance prediction for data mining: we expect that progress in that front will also benefit the task of selecting the best method for duplicate detection.
Improving the Efficiency of Duplicate Detection
So far, in our discussion of methods for detecting whether two records refer to the same real-world object, we have focused mainly on the quality of the comparison techniques and not on the efficiency of the duplicate detection process. Now, we turn to the central issue of improving the speed of duplicate detection.
An elementary technique for discovering matching entries in two tables A and B is to execute a "nested-loop" comparison, i.e., to compare every record of table A with every record in table B. Unfortunately, such a strategy requires a total of |A| · |B| comparisons, a cost that is prohibitively expensive even for moderately sized tables. In reducing the number of record comparisons we describe techniques that substantially reduce the number of required comparisons.
Another factor that can lead to increased computation expense is the cost of a single comparison. It is not uncommon for a record to contain tens of fields; therefore, each record comparison requires multiple field comparisons, and each field comparison can be expensive. For example, computing the edit distance between two long strings s1 and s2 has a cost of O(|s1| · |s2|); just checking whether they are within a prespecified edit distance threshold k can reduce the complexity to O(k · max(|s1|, |s2|)) (see character-based similarity metrics). We examine some of the methods that can be used to reduce the cost of record comparison in improving efficiency of record comparison.
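As a concrete illustration of such a threshold check, the sketch below decides whether two strings are within edit distance k by filling only a diagonal band of the dynamic-programming matrix and stopping as soon as every cell in the band exceeds k. It is a generic illustration of the technique under these assumptions, not code from any of the systems surveyed here.

```python
def within_edit_distance(s1, s2, k):
    """Check whether edit_distance(s1, s2) <= k using a banded dynamic program.

    Only cells within a diagonal band of width 2k+1 are computed, so the cost is
    O(k * max(len(s1), len(s2))) instead of O(len(s1) * len(s2)).
    """
    if abs(len(s1) - len(s2)) > k:           # length gap alone rules out a match
        return False
    if not s1 or not s2:                     # one string empty: distance is the other length
        return max(len(s1), len(s2)) <= k
    INF = k + 1                              # stands for "already more than k"
    prev = list(range(len(s2) + 1))          # classic first DP row: 0..len(s2)
    for i in range(1, len(s1) + 1):
        curr = [INF] * (len(s2) + 1)
        curr[0] = i
        lo = max(1, i - k)
        hi = min(len(s2), i + k)
        for j in range(lo, hi + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution or match
        if min(curr[lo:hi + 1]) > k:          # whole band exceeds k: stop early
            return False
        prev = curr
    return prev[len(s2)] <= k

# Example: within_edit_distance("Jon Smith", "John Smith", 1) returns True.
```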
Reducing the Number of Record Comparisons
editBlocking
editOne "traditional" method for identifying identical records in a database table is to scan the table and compute the value of a hash function for each record. The value of the hash function defines the "bucket" to which this record is assigned. By definition, two records that are identical will be assigned to the same bucket. Therefore, in order to locate duplicates, it is enough to compare only the records that fall into the same bucket for matches. The hashing technique cannot be used directly for approximate duplicates, since there is no guarantee that the hash value of two similar records will be the same. However, there is an interesting counterpart of this method, named blocking.
In analogy with the use of a hash function above, blocking typically refers to the procedure of subdividing the files into a set of mutually exclusive subsets (blocks) under the assumption that no matches occur across different blocks. A common approach to creating these blocks is to apply a function such as Soundex, NYSIIS, or Metaphone (see phonetic similarity metrics) to a highly discriminating field (e.g., last name) and compare only records that have similar, but not necessarily identical, field values.
Although blocking can substantially increase the speed of the comparison process, it can also lead to an increased number of false mismatches due to the failure to compare records that do not agree on the blocking field. It can also lead to an increased number of missed matches due to errors in the blocking step that place entries in the wrong buckets, thereby preventing them from being compared to actual matching entries. One alternative is to execute the duplicate detection algorithm in multiple runs, each time using a different field for blocking, as in the sketch below. This approach can substantially reduce the probability of false mismatches, with a relatively small increase in the running time.
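A minimal sketch of blocking with multiple passes is shown below. The simplified phonetic key and the field names in the usage comment are illustrative assumptions, not part of any specific system discussed here.

```python
from collections import defaultdict
import itertools

def phonetic_key(name):
    """Very small Soundex-style phonetic key (illustrative, not the full standard)."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    key = name[0].upper() if name else ""
    last = codes.get(name[0], "") if name else ""
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            key += code
        last = code
    return (key + "000")[:4]

def blocked_pairs(records, blocking_keys):
    """Generate candidate record pairs, one blocking pass per key function.

    `records` is a list of dicts; `blocking_keys` is a list of functions mapping a
    record to its block value. Running several passes with different keys reduces
    the chance that a true match is missed because of an error in one blocking field.
    """
    seen = set()
    for key_fn in blocking_keys:
        blocks = defaultdict(list)
        for idx, rec in enumerate(records):
            blocks[key_fn(rec)].append(idx)
        for bucket in blocks.values():
            for i, j in itertools.combinations(bucket, 2):
                if (i, j) not in seen:          # avoid re-comparing across passes
                    seen.add((i, j))
                    yield records[i], records[j]

# Hypothetical usage: one pass on a phonetic key of the last name, one on the zip code.
# pairs = blocked_pairs(table, [lambda r: phonetic_key(r["last_name"]), lambda r: r["zipcode"]])
```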
Sorted Neighborhood Approach
editHernáandez and Stolfo[14] describe the so-called sorted neighborhood approach. The method consists of the following three steps:
- Create key: A key for each record in the list is computed by extracting relevant fields or portions of fields.
- Sort data: The records in the database are sorted by using the key found in the first step. A sorting key is defined to be a sequence of attributes, or a sequence of sub-strings within the attributes, chosen from the record in an ad hoc manner. Attributes that appear first in the key have a higher priority than those that appear subsequently.
- Merge: A fixed-size window is moved through the sequential list of records in order to limit the comparisons for matching records to those records in the window. If the size of the window is w records, then every new record that enters the window is compared with the previous w − 1 records to find "matching" records. The first record in the window then slides out of it.
The sorted neighborhood approach relies on the assumption that duplicate records will be close in the sorted list, and therefore will be compared during the merge step. The effectiveness of the sorted neighborhood approach is highly dependent upon the comparison key that is selected to sort the records. In general, no single key will be sufficient to sort the records in such a way that all the matching records can be detected. If the error in a record occurs in the particular field or portion of the field that is the most important part of the sorting key, there is a very small possibility that the record will end up close to a matching record after sorting.
To increase the number of similar records merged, Hernández and Stolfo implemented a strategy for executing several independent runs of the sorted neighborhood method (presented above), each time using a different sorting key and a relatively small window. This strategy is called the multi-pass approach, and it is similar in spirit to the multiple-run blocking approach described above. Each independent run produces a set of pairs of records that can be merged; the final results, including the transitive closure of the records matched in different passes, are subsequently computed.
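The following sketch shows a single pass of the sorted-neighborhood method; the `sort_key`, `window`, and `is_match` parameters are placeholders that a real application would have to supply, and the code is an illustration of the idea rather than the authors' implementation.

```python
def sorted_neighborhood(records, sort_key, window, is_match):
    """One pass of the sorted-neighborhood method.

    Records are sorted by `sort_key`; each record is then compared only with the
    w - 1 records that precede it in the sorted order, where w = `window`.
    Returns the list of matching index pairs found in this pass.
    """
    order = sorted(range(len(records)), key=lambda i: sort_key(records[i]))
    matches = []
    for pos, i in enumerate(order):
        for j in order[max(0, pos - window + 1):pos]:   # the previous w - 1 records
            if is_match(records[i], records[j]):
                matches.append((j, i))
    return matches

# A multi-pass run repeats this with different keys and small windows, and then takes
# the transitive closure of all pairs found (e.g., with a union-find structure).
```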
Clustering and Canopies
Monge and Elkan[73] try to improve the performance of a basic "nested-loop" record comparison by assuming that duplicate detection is transitive: if record R1 is deemed a duplicate of R2, and R2 is deemed a duplicate of R3, then R1 and R3 are also duplicates. Under the assumption of transitivity, the problem of matching records in a database can be described in terms of determining the connected components of an undirected graph. At any time, the connected components of the graph correspond to the transitive closure of the "record matches" relationships discovered so far. Monge and Elkan[73] use a union-find structure to efficiently compute the connected components of the graph. During the Union step, duplicate records are "merged" into a cluster and only a "representative" of the cluster is kept for subsequent comparisons. This reduces the total number of record comparisons without substantially reducing the accuracy of the duplicate detection process. The concept behind this approach is that if a record is not similar to a record already in the cluster, then it will not match the other members of the cluster either.
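A much simplified sketch of this idea is shown below: a union-find structure maintains the clusters, and each incoming record is compared only against one representative per cluster. This illustrates only the union-find bookkeeping under the transitivity assumption, not the authors' full algorithm.

```python
class UnionFind:
    """Union-find (disjoint sets) with path compression, used to maintain the
    connected components of the "record matches" graph incrementally."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path compression
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            self.parent[ry] = rx

def cluster_duplicates(records, is_match):
    """Group records into duplicate clusters under the transitivity assumption.
    Each new record is compared against one representative per existing cluster."""
    uf = UnionFind(len(records))
    representatives = []                       # one record index per cluster seen so far
    for i, rec in enumerate(records):
        for r in representatives:
            if is_match(rec, records[r]):
                uf.union(r, i)
                break                          # merged into an existing cluster
        else:
            representatives.append(i)          # rec starts its own cluster
    return uf
```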
McCallum et al.[95] propose the use of canopies for speeding up the duplicate detection process. The basic idea is to use a cheap comparison metric to group records into overlapping clusters called canopies. (This is in contrast to blocking, which requires hard, non-overlapping partitions.) After this first step, the records are compared pairwise, using a more expensive similarity metric that leads to better qualitative results. The assumption behind this method is that there is an inexpensive similarity function that can be used as a "quick-and-dirty" approximation for another, more expensive function. For example, if two strings have a length difference larger than 3, then their edit distance cannot be smaller than 3; in that case, the length comparison serves as a cheap (canopy) function for the more expensive edit distance. Cohen and Richman[75] propose the tf.idf similarity metric as a canopy distance, and then use multiple (expensive) similarity metrics to infer whether two records are duplicates. Gravano et al.[45] propose using the string lengths and the number of common q-grams of two strings as canopies (filters according to[45]) for the edit distance metric, which is expensive to compute in a relational database. The advantage of this technique is that the canopy functions can be evaluated efficiently using vanilla SQL statements. In a similar fashion, Chaudhuri et al.[96] propose using an indexable canopy function for easily identifying similar tuples in a database. Baxter et al.[97] perform an experimental comparison of canopy-based approaches with traditional blocking and show that the flexible nature of canopies can significantly improve the quality and speed of duplicate detection.
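The canopy formation step can be sketched as follows. The `cheap_dist` function and the two thresholds are assumptions supplied by the user; the expensive pairwise comparison would then run separately inside each canopy.

```python
import random

def canopies(points, cheap_dist, loose, tight):
    """Canopy clustering in the style of McCallum et al.: a cheap distance forms
    overlapping groups; an expensive metric is later applied only within each canopy.

    `loose` >= `tight` are distance thresholds for the cheap metric.
    Returns a list of canopies, each a list of points (canopies may overlap).
    """
    points = list(points)
    remaining = list(points)
    result = []
    while remaining:
        center = random.choice(remaining)
        canopy = [x for x in points if cheap_dist(center, x) <= loose]
        result.append(canopy)
        # Points within the tight threshold no longer serve as future canopy centers.
        remaining = [x for x in remaining if cheap_dist(center, x) > tight]
    return result

# Example of a cheap canopy distance for strings: the difference in length,
# which lower-bounds the (expensive) edit distance.
length_gap = lambda a, b: abs(len(a) - len(b))
```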
Set Joins
Another direction towards efficiently implementing data cleaning operations is to speed up the execution of set operations: a large number of the similarity metrics discussed in field matching techniques use set operations as part of the overall computation. Running set operations on all pair combinations is computationally expensive and typically unnecessary. For data cleaning applications, the interesting pairs are only those in which the similarity value is high. Many techniques exploit this property and suggest algorithms for the fast computation of set-based operations on a set of records.
Cohen[81] proposed using a set of in-memory inverted indexes together with an A* search algorithm to locate the top-k most similar pairs, according to the cosine similarity metric. Soffer et al.[98], mainly in the context of information retrieval, suggest pruning the inverted index, removing terms with low weights, since they do not contribute much to the computation of the tf.idf cosine similarity. Gravano et al.[49] present an SQL-based approach that is analogous to the approach of Soffer et al.[98] and allows fast computation of cosine similarity within an RDBMS. Mamoulis[99] presents techniques for efficiently processing a set join in a database, focusing on the containment and non-zero-overlap operators, and shows that inverted indexes are typically superior to approaches based on signature files, confirming earlier comparison studies[100]. Sarawagi and Kirpal[101] extend the set joins approach to a large number of similarity predicates that use set joins. Their Probe-Cluster approach works well in environments with limited main memory and can be used to compute efficiently a large number of similarity predicates, in contrast to previous approaches, which were tuned for a smaller number of similarity predicates (e.g., set containment or cosine similarity). Furthermore, Probe-Cluster returns exact values for the similarity metrics, in contrast to previous approaches, which used approximation techniques.
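The following sketch illustrates the general flavor of such set-based optimizations: an in-memory inverted index over record tokens accumulates tf.idf cosine scores only for record pairs that share at least one token. It is a plain illustration of the inverted-index idea, not Cohen's A*-based top-k algorithm or Probe-Cluster.

```python
import math
from collections import defaultdict

def similar_pairs(docs, threshold):
    """Accumulate tf.idf cosine similarities through an inverted index, so that only
    record pairs sharing at least one token are ever considered.

    `docs` maps a record id to its list of tokens; pairs scoring >= threshold are returned.
    """
    n = len(docs)
    df = defaultdict(int)                      # document frequency of each token
    for tokens in docs.values():
        for t in set(tokens):
            df[t] += 1
    index = defaultdict(list)                  # token -> [(record id, normalized weight)]
    for doc_id, tokens in docs.items():
        tf = defaultdict(int)
        for t in tokens:
            tf[t] += 1
        weights = {t: (1 + math.log(c)) * math.log(n / df[t]) for t, c in tf.items()}
        norm = math.sqrt(sum(w * w for w in weights.values())) or 1.0
        for t, w in weights.items():
            index[t].append((doc_id, w / norm))
    scores = defaultdict(float)                # (id1, id2) -> accumulated cosine similarity
    for postings in index.values():
        for i, (d1, w1) in enumerate(postings):
            for d2, w2 in postings[i + 1:]:
                scores[tuple(sorted((d1, d2)))] += w1 * w2
    return [(pair, score) for pair, score in scores.items() if score >= threshold]
```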
Improving the Efficiency of Record Comparison
So far, we have examined techniques that reduce the number of required record comparisons without compromising the quality of the duplicate detection process. Another way of improving the efficiency of duplicate detection is to improve the efficiency of a single record comparison. Next, we review some of these techniques.
When comparing two records, it may become obvious, after having computed the differences for only a small portion of their fields, that the pair does not match, irrespective of the results of any further comparisons. It is therefore desirable to terminate the field comparisons for a pair of records as soon as possible, to avoid wasting additional, valuable time. The field comparisons should be terminated when even complete agreement of all the remaining fields cannot reverse the unfavorable evidence against the matching of the records[13]. To make early termination work, the global likelihood ratio for the full agreement of each of the identifiers should be calculated in advance. At any given point in the comparison sequence, the maximum collective favorable evidence that could be accumulated from that point forward indicates what improvement in the overall likelihood ratio might conceivably result if the comparisons were continued.
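A sketch of this early-termination logic is shown below; the field comparator functions and their maximum agreement weights are hypothetical inputs that a probabilistic matcher would have estimated beforehand.

```python
def record_matches(record_a, record_b, field_comparators, threshold):
    """Early-terminating record comparison.

    `field_comparators` is a list of (compare, max_weight) pairs: compare(a, b) returns
    the likelihood-ratio style weight contributed by one field comparison, and
    max_weight is the weight that full agreement on that field would contribute.
    The loop stops as soon as the evidence collected so far, plus the best evidence
    the remaining fields could possibly add, can no longer reach the threshold.
    """
    best_remaining = sum(max_w for _, max_w in field_comparators)
    total = 0.0
    for compare, max_w in field_comparators:
        best_remaining -= max_w
        total += compare(record_a, record_b)
        if total + best_remaining < threshold:
            return False      # the unfavorable evidence can no longer be reversed
    return total >= threshold
```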
Verykios et al.[89] propose a set of techniques for reducing the complexity of record comparison. The first step is to apply a feature subset selection algorithm to reduce the dimensionality of the input. By using a feature selection algorithm (e.g.,[102]) as a preprocessing step, the record comparison process uses only a small subset of the record fields, which speeds up the comparison process. Additionally, the induced model can be generated in less time and is usually characterized by higher predictive accuracy. Verykios et al.[89] also suggest using a pruning technique on the decision trees that are used to classify record pairs as matches or mismatches. Pruning produces smaller trees that not only avoid over-fitting and have higher accuracy, but also allow for faster execution of the matching algorithm.
Duplicate Detection Tools
Over the past several years, a range of tools for cleaning data has appeared on the market, and research groups have made publicly available software packages that can be used for duplicate record detection. In this section, we review such packages, focusing on tools that have an open architecture and allow users to understand the underlying mechanics of the matching mechanisms.
The Febrl system (Freely Extensible Biomedical Record Linkage) is an open-source data cleaning toolkit with two main components: the first deals with data standardization and the second performs the actual duplicate detection. The data standardization relies mainly on hidden Markov models (HMMs); therefore, Febrl typically requires training to correctly parse the database entries. For duplicate detection, Febrl implements a variety of string similarity metrics, such as Jaro, edit distance, and q-gram distance (see field matching techniques). Finally, Febrl supports phonetic encodings (Soundex, NYSIIS, and Double Metaphone) to detect similar names. Since phonetic similarity is sensitive to errors in the first letter of a name, Febrl also computes the phonetic encoding of the reversed version of the name string, sidestepping the "first-letter" sensitivity problem.
TAILOR[92] is a flexible record matching toolbox that allows users to apply different duplicate detection methods on their data sets. The flexibility of using multiple models is useful when users do not know which duplicate detection model will perform most effectively on their particular data. TAILOR follows a layered design, separating comparison functions from the duplicate detection logic. Furthermore, the execution strategies that improve efficiency are implemented in a separate layer, making the system more extensible than systems that rely on monolithic designs. Finally, TAILOR reports statistics, such as estimated accuracy and completeness, which can help users better understand the quality of a given duplicate detection run over a new data set.
WHIRL is a duplicate record detection system available for free for academic and research use. WHIRL uses the tf.idf token-based similarity metric to identify similar strings within two lists. The Flamingo Project provides a similar, simple string matching tool that takes as input two lists of strings and returns the string pairs that are within a prespecified edit distance threshold. WizSame by WizSoft is also a product that allows the discovery of duplicate records in a database. Its matching algorithm is very similar to SoftTF.IDF (see token-based similarity metrics): two records match if they contain a significant fraction of identical or similar words, where similar words are those within edit distance one.
BigMatch[103] is the duplicate detection program used by the U.S. Census Bureau. It relies on blocking strategies to identify potential matches between the records of two relations, and scales well for very large data sets. The only requirement is that one of the two relations should fit in memory, and it is possible to fit in memory even relations with 100 million records. The main goal of BigMatch is not to perform sophisticated duplicate detection, but rather to generate a set of candidate pairs that should be then processed by more sophisticated duplicate detection algorithms.
Finally, we should note that currently many database vendors (Oracle, IBM, Microsoft) do not provide sufficient tools for duplicate record detection. Most of the effort until now has focused on creating easy-to-use ETL tools that can standardize database records and fix minor errors, mainly in the context of address data. Another typical function of the tools that are provided today is the ability to use reference tables to standardize the representation of entities that are well known to have multiple representations. (For example, "TKDE" is also frequently written as "IEEE TKDE" or as "Transactions on Knowledge and Data Engineering.") A recent, positive step is the inclusion of multiple data cleaning operators in Microsoft SQL Server Integration Services, which is part of Microsoft SQL Server 2005. For example, SQL Server now includes the ability to perform "fuzzy matches" and implements "error-tolerable indexes" that allow fast execution of such approximate lookups; the adopted similarity metric is similar to SoftTF.IDF, described in token-based similarity metrics. Ideally, the other major database vendors will follow suit, adding similar capabilities and extending their current ETL packages.
Future Directions and Conclusions
In this article, we have presented a comprehensive survey of the existing techniques used for detecting non-identical duplicate entries in database records. The interested reader may also want to read a complementary survey by Winkler[104] and the Special Issue of the IEEE Data Engineering Bulletin on Data Quality[105].
As database systems are becoming more and more commonplace, data cleaning is going to be the cornerstone for correcting errors in systems which are accumulating vast amounts of errors on a daily basis. Despite the breadth and depth of the presented techniques, we believe that there is still room for substantial improvements in the current state-of-the-art.
First of all, it is currently unclear which metrics and techniques represent the state of the art. The lack of standardized, large-scale benchmarking data sets is a significant obstacle to the further development of the field, as it is almost impossible to convincingly compare new techniques with existing ones. A repository of benchmark data sources with known and diverse characteristics should be made available to developers so that they may evaluate their methods during the development process. Along with benchmark and evaluation data, various systems need some form of training data to produce the initial matching model. Although small data sets are available, we are not aware of large-scale, validated data sets that could be used as benchmarks. Winkler[106] highlights techniques for deriving data sets that are properly anonymized and are still useful for duplicate record detection purposes.
Currently, there are two main approaches for duplicate record detection. Research in databases emphasizes relatively simple and fast duplicate detection techniques, that can be applied to databases with millions of records. Such techniques typically do not rely on the existence of training data, and emphasize efficiency over effectiveness. On the other hand, research in machine learning and statistics aims to develop more sophisticated matching techniques that rely on probabilistic models. An interesting direction for future research is to develop techniques that combine the best of both worlds.
Most of the duplicate detection systems available today offer various algorithmic approaches for speeding up the duplicate detection process. The changing nature of the duplicate detection process also requires adaptive methods that detect different patterns for duplicate detection and automatically adapt themselves over time. For example, a background process could monitor the current data, incoming data and any data sources that need to be merged or matched, and decide, based on the observed errors, whether a revision of the duplicate detection process is necessary or not. Another related aspect of this challenge is to develop methods that permit the user to derive the proportions of errors expected in data cleaning projects.
Finally, large amounts of structured information are now derived from unstructured text and from the web. This information is typically imprecise and noisy; duplicate record detection techniques are crucial for improving the quality of the extracted data. The increasing popularity of information extraction techniques is going to make this issue more prevalent in the future, highlighting the need to develop robust and scalable solutions. This only adds to the sentiment that more research is needed in the area of duplicate record detection and in the area of data cleaning and information quality in general.
Notes
^Note 1: The q-grams in our context are defined at the character level. In speech processing and in computational linguistics, researchers often use the term n-gram to refer to sequences of words.
^Note 2: The token similarity is measured using a metric that works well for short strings, such as edit distance and Jaro.
References
- ^ Chatterjee, Abhirup (1991). "Data Manipulation in Heterogeneous Databases". ACM SIGMOD Record. 20 (4): 64–68. doi:10.1145/141356.141385.
- ^ Sarawagi, Sunita (2000). "Special Issue on Data Cleaning". IEEE Data Engineering Bulletin. Vol. 23.
- ^ Widom, Jennifer (1995). "Research problems in data warehousing". Proceedings of the 1995 ACM Conference on Information and Knowledge Management (CIKM'95). pp. 25–30.
- ^ Broder, Andrei Z. (1997). "Syntactic Clustering of the Web". Proceedings of the Sixth International World Wide Web Conference (WWW6). pp. 1157–1166.
- ^ Cho, Junghoo (2000). "Finding Replicated Web Collections". Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data (SIGMOD 2000). pp. 355–366.
- ^ Mitkov, Ruslan (aug 2002). Anaphora Resolution. Longman.
- ^ McCallum, Andrew (2005). "Information extraction: Distilling structured data from unstructured text". ACM Queue. 3 (9): 48–57. doi:10.1145/1105664.1105679.
- ^ a b Newcombe, Howard B. (oct 1959). "Automatic Linkage of Vital Records". Science. 130 (3381): 954–959. doi:10.1126/science.130.3381.954. PMID 14426783.
- ^ Newcombe, Howard B. (nov 1962). "Record Linkage: Making Maximum Use of the Discriminating Power of Identifying Information". Communications of the ACM. 5 (11): 563–566. doi:10.1145/368996.369026.
- ^ a b Newcombe, Howard B. (may 1967). "Record Linking: The Design of Efficient Systems for Linking Records into Individual and Family Histories". American Journal of Human Genetics. 19 (3): 335–359.
- ^ a b Tepping, Benjamin J. (dec 1968). "A Model for Optimum Linkage of Records". Journal of the American Statistical Association. 63 (345): 1321–1332. doi:10.1080/01621459.1968.10480930.
- ^ a b c Fellegi, Ivan Peter (dec 1969). "A theory for record linkage". Journal of the American Statistical Association. 64 (328): 1183–1210. doi:10.1080/01621459.1969.10501049.
- ^ a b Newcombe, Howard B. (1988). Handbook of Record Linkage. Oxford University Press.
- ^ a b c Hernández, Mauricio Antonio (jan 1998). "Real-world Data is Dirty: Data Cleansing and The Merge/Purge Problem". Data Mining and Knowledge Discovery. 2 (1): 9–37. doi:10.1023/A:1009761603038.
- ^ a b Sarawagi, Sunita (2002). "Interactive Deduplication using Active Learning". Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2002). pp. 269–278.
- ^ a b Wang, Y. Richard (1989). "The Inter-Database Instance Identification Problem in Integrating Autonomous Systems". Proceedings of the Fifth IEEE International Conference on Data Engineering (ICDE 1989). pp. 46–55.
- ^ Cohen, William W. (2000). "Hardening soft information sources". Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2000). pp. 255–259.
- ^ a b c d e f Bilenko, Mikhail (2003). "Adaptive Name Matching in Information Integration". IEEE Intelligent Systems. 18 (5): 16–23. doi:10.1109/MIS.2003.1234765.
- ^ Kimball, Ralph (2004). The data warehouse ETL toolkit: Practical techniques for extracting, cleaning, conforming, and delivering data. John Wiley & Sons.
- ^ Rundensteiner, Elke (1999). "Special Issue on Data Transformation". IEEE Data Engineering Bulletin. Vol. 22.
- ^ McCallum, Andrew (2000). "Maximum Entropy Markov Models for Information Extraction and Segmentation". Proceedings of the 17th International Conference on Machine Learning (ICML 2000). pp. 46–55.
- ^ Borkar, Vinayak R. (2001). "Automatic Segmentation of Text into Structured Records". Proceedings of the 2001 ACM SIGMOD International Conference on Management of Data (SIGMOD 2001). pp. 175–186.
- ^ Agichtein, Eugene (2004). "Mining reference tables for automatic text segmentation". Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2004). pp. 20–29.
- ^ Sutton, Charles (2004). "Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data". Proceedings of the 21st International Conference on Machine Learning (ICML 2004).
- ^ Raman, Vijayshankar (2001). "Potter's Wheel: An Interactive Data Cleaning System". Proceedings of the 27th International Conference on Very Large Databases (VLDB 2001). pp. 381–390.
- ^ a b Perkowitz, Mike (mar 1997). "Learning to Understand Information on the Internet: An Example-Based Approach". Journal of Intelligent Information Systems. 8 (2): 133–153. doi:10.1023/A:1008672508721.
- ^ a b Dasu, Tamraparni (2002). "Mining database structure; or, how to build a data quality browser". Proceedings of the 2002 ACM SIGMOD International Conference on Management of Data (SIGMOD 2002). pp. 240–251.
- ^ Levenshtein, Vladimir I. (1965). "Binary Codes Capable of Correcting Deletions, Insertions and Reversals". Doklady Akademii Nauk SSSR. 163 (4): 845–848.
- ^ Landau, Gad M. (jun 1989). "Fast parallel and serial approximate string matching". Journal of Algorithms. 10 (2): 157–169. doi:10.1016/0196-6774(89)90010-2.
- ^ a b Needleman, Saul Ben (mar 1970). "A general method applicable to the search for similarities in the amino acid sequence of two proteins". Journal of Molecular Biology. 48 (3): 443–453. doi:10.1016/0022-2836(70)90057-4. PMID 5420325.
- ^ a b Ristad, Eric Sven (may 1998). "Learning String Edit Distance". IEEE Transactions on Pattern Analysis and Machine Intelligence. 20 (5): 522–532. doi:10.1109/34.682181.
- ^ Waterman, Michael S. (1976). "Some biological sequence metrics". Advances in Mathematics. 20 (4): 367–387. doi:10.1016/0001-8708(76)90202-4.
- ^ Smith, Temple F. (1981). "Identification of common molecular subsequences". Journal of Molecular Biology. 147 (1): 195–197. doi:10.1016/0022-2836(81)90087-5. PMID 7265238.
- ^ Altschul, Stephen F. (oct 1990). "Basic Local Alignment Search Tool". Journal of Molecular Biology. 215 (3): 403–410. doi:10.1016/S0022-2836(05)80360-2. PMID 2231712.
- ^ Baeza-Yates, Ricardo (oct 1992). "A new approach to text searching". Communications of the ACM. 35 (10): 74–82. doi:10.1145/135239.135243.
- ^ Wu, Sun (oct 1992). "Fast text searching allowing errors". Communications of the ACM. 35 (10): 83–91. doi:10.1145/135239.135244.
- ^ Pinheiro, Jose C. (1998). "Methods for Linking and Mining Heterogeneous Databases". Proceedings of the International Conference on Knowledge Discovery and Data Mining (KDD-98). pp. 309–313.
- ^ Jaro, Matthew A. (1976), UNIMATCH: A Record Linkage System: User's Manual, U.S. Bureau of the Census, Washington, D.C.
- ^ Winkler, William E. (1991), "An Application of the Fellegi-Sunter Model of Record Linkage to the 1990 U.S. Decennial Census", Statistical Research Report Series RR91/09, U.S. Bureau of the Census, Washington, D.C.
- ^ Ullmann, Julian R. (1977). "A Binary n-Gram Technique for Automatic Correction of Substitution, Deletion, Insertion and Reversal Errors in Words". The Computer Journal. 20 (2): 141–147. doi:10.1093/comjnl/20.2.141.
- ^ Ukkonen, Esko (1992). "Approximate string matching with q-grams and maximal matches". Theoretical Computer Science. 92 (1): 191–211. doi:10.1016/0304-3975(92)90143-4.
- ^ Kukich, Karen (1992). "Techniques for Automatically Correcting Words in Text". ACM Computing Surveys. 24 (4): 377–439. doi:10.1145/146370.146380.
- ^ Sutinen, Erkki (1995). "On Using q-gram Locations in Approximate String Matching". Proceedings of Third Annual European Symposium on Algorithms (ESA'95). pp. 327–340.
- ^ a b c Gravano, Luis (2001). "Approximate string joins in a database (almost) for free". Proceedings of the 27th International Conference on Very Large Databases (VLDB 2001). pp. 491–500.
- ^ Gravano, Luis (dec 2001). "Using q-grams in a DBMS for Approximate String Processing". IEEE Data Engineering Bulletin. 24 (4): 28–34.
- ^ a b Monge, Alvaro E. (1996). "The Field Matching Problem: Algorithms and Applications". Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96). pp. 267–270.
- ^ Cohen, William Weston (1998). "Integration of heterogeneous databases without common domains using queries based on textual similarity". Proceedings of the 1998 ACM SIGMOD International Conference on Management of Data (SIGMOD'98). pp. 201–212.
- ^ a b Gravano, Luis (2003). "Text joins in an RDBMS for web data integration". Proceedings of the 12th International World Wide Web Conference (WWW12). pp. 90–101.
- ^ Russell, Robert C. (apr 1918), Index, U.S. Patent 1,261,167
- ^ Russell, Robert C. (nov 1922), Index, U.S. Patent 1,435,663
- ^ a b Taft, Robert L. (1970), "Name Search Techniques", Special Report No. 1, New York State Identification and Intelligence System, Albany, NY
- ^ Gill, Leicester E. (1997). "OX-LINK: The Oxford Medical Record Linkage System". Proceedings of the International Record Linkage Workshop and Exposition. pp. 15–33.
- ^ Philips, Lawrence (dec 1990). "Hanging on the Metaphone". Computer Language Magazine. 7 (12): 39–44.
- ^ Philips, Lawrence (jun 2000). "The Double Metaphone Search Algorithm". C/C++ Users Journal. 18 (5).
- ^ a b Koudas, Nick (2004). "Flexible String Matching Against Large Databases in Practice". Proceedings of the 30th International Conference on Very Large Databases (VLDB 2004). pp. 1078–1086.
- ^ Agrawal, Rakesh (2002). "Searching with numbers". Proceedings of the 11th International World Wide Web Conference (WWW11). pp. 420–431.
- ^ Yancey, William E. (jun 2005), "Evaluating String Comparator Performance for Record Linkage", Statistical Research Report Series RRS2005/05, U.S. Bureau of the Census, Washington, D.C.
- ^ a b Tejada, Sheila (2002). "Learning Domain-Independent String Transformation Weights for High Accuracy Object Identification". Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2002).
- ^ a b Hastie, Trevor (aug 2001). The Elements of Statistical Learning. Springer Verlag.
- ^ Jaro, Matthew A. (jun 1989). "Advances in Record-Linkage Methodology as Applied to Matching the 1985 Census of Tampa, Florida". Journal of the American Statistical Association. 84 (406): 414–420. doi:10.1080/01621459.1989.10478785.
- ^ Dempster, Arthur Pentland (1977). "Maximum likelihood from incomplete data via the EM algorithm". Journal of the Royal Statistical Society. B (39): 1–38.
- ^ Winkler, William E. (1993), "Improved Decision Rules In The Fellegi-Sunter Model Of Record Linkage", Statistical Research Report Series RR93/12, U.S. Bureau of the Census, Washington, D.C.
- ^ a b Winkler, William E. (2002), "Methods For Record Linkage and Bayesian Networks", Statistical Research Report Series RR2002/05, U.S. Bureau of the Census, Washington, D.C.
- ^ Nigam, Kamal (2000). "Text Classification from Labeled and Unlabeled Documents using EM". Machine Learning. 39 (2/3): 103–134. doi:10.1023/A:1007692713085.
- ^ Du Bois, Jr., Nelson S. D'Andrea (mar 1969). "A Solution to the Problem of Linking Multivariate Documents". Journal of the American Statistical Association. 64 (325): 163–174.
- ^ a b Duda, Richard Oswald (2000). Pattern Classification. Wiley.
- ^ Verykios, Vassilios S. (2003). "A Bayesian decision model for cost optimal record matching". VLDB Journal. 12 (1): 28–40.
- ^ Verykios, Vassilios S. (2004). "A generalized cost optimal decision model for record matching". Proceedings of the 2004 International Workshop on Information Quality in Information Systems. pp. 20–26.
- ^ Cochinwala, Munir (sep 2001). "Efficient data reconciliation". Information Sciences. 137 (1–4): 1–15. doi:10.1016/S0020-0255(00)00070-0.
- ^ Breiman, Leo (jul 1984). Classification and Regression Trees. CRC Press.
- ^ Joachims, Thorsten (1999), Bernhard Scholkopf and Christopher John C. Burges and Alexander J. Smola (ed.), "Making large-Scale SVM Learning Practical", Advances in Kernel Methods - Support Vector Learning, MIT-Press
- ^ a b c d Monge, Alvaro E. (1997). "An Efficient Domain-Independent Algorithm for Detecting Approximately Duplicate Database Records". Proceedings of the 2nd ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery (DMKD'97). pp. 23–29.
- ^ a b c Bansal, Nikhil (2004). "Correlation Clustering". Machine Learning. 56 (1–3): 89–113. doi:10.1023/B:MACH.0000033116.57574.95.
- ^ a b Cohen, William Weston (2002). "Learning to Match and Cluster Large High-Dimensional Data Sets For Data Integration". Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2002).
- ^ McCallum, Andrew (2004). "Conditional Models of Identity Uncertainty with Application to Noun Coreference". Advances in Neural Information Processing Systems (NIPS 2004).
- ^ Singla, Parag (2004). "Multi-Relational Record Linkage". KDD-2004 Workshop on Multi-Relational Data Mining. pp. 31–48.
- ^ Pasula, Hanna (2002). "Identity Uncertainty and Citation Matching". Advances in Neural Information Processing Systems (NIPS 2002). pp. 1401–1408.
- ^ Cohn, David A. (1994). "Improving Generalization with Active Learning". Machine Learning. 15 (2): 201–221. doi:10.1007/BF00993277.
- ^ Tejada, Sheila (2001). "Learning object identification rules for information integration". Information Systems. 26 (8): 607–633. doi:10.1016/S0306-4379(01)00042-4.
- ^ a b Cohen, William W. (2000). "Data integration using similarity joins and a word-based information representation language". ACM Transactions on Information Systems. 18 (3): 288–321. doi:10.1145/352595.352598.
- ^ Dey, Debabrata (1998). "Entity Matching in Heterogeneous Databases: A Distance Based Decision Model". 31st Annual Hawaii International Conference on System Sciences (HICSS'98). pp. 305–313.
- ^ Guha, Sudipto (2004). "Merging the Results of Approximate Match Operations". Proceedings of the 30th International Conference on Very Large Databases (VLDB 2004). pp. 636–647.
- ^ Ahuja, Ravindra K. (feb 1993). Network Flows: Theory, Algorithms, and Applications. Prentice Hall.
- ^ Ananthakrishna, Rohit (2002). "Eliminating Fuzzy Duplicates in Data Warehouses". Proceedings of the 28th International Conference on Very Large Databases (VLDB 2002).
- ^ Chaudhuri, Surajit (2005). "Robust Identification of Fuzzy Duplicates". Proceedings of the 21st IEEE International Conference on Data Engineering (ICDE 2005). pp. 865–876.
- ^ Lim, Ee-Peng (1993). "Entity Identification in Database Integration". Proceedings of the Ninth IEEE International Conference on Data Engineering (ICDE 1993). pp. 294–301.
- ^ Galhardas, Helena (2001). "Declarative Data Cleaning: Language, Model, and Algorithms". Proceedings of the 27th International Conference on Very Large Databases (VLDB 2001). pp. 371–380.
- ^ a b c Verykios, Vassilios S. (jul 2000). "Automating the Approximate Record Matching Process". Information Sciences. 126 (1–4): 83–98. doi:10.1016/S0020-0255(00)00013-X.
- ^ Blum, Avrim (1998). "Combining labeled and unlabeled data with co-training". COLT' 98: Proceedings of the eleventh annual conference on Computational learning theory. pp. 92–100.
- ^ Cheeseman, Peter (1996), "Bayesian Classification (AutoClass): Theory and Results", Advances in Knowledge Discovery and Data Mining, AAAI Press/The MIT Press
- ^ a b Elfeky, Mohamed G. (2002). "A record linkage tool box". Proceedings of the 18th IEEE International Conference on Data Engineering (ICDE 2002). pp. 17–28.
- ^ Ravikumar, Pradeep (2004). "A Hierarchical Graphical Model for Record Linkage". 20th Conference on Uncertainty in Artificial Intelligence (UAI 2004).
- ^ Bhattacharya, Indrajit (aug 2005), Latent Dirichlet Allocation Model for Entity Resolution, Computer Science Department, University of Maryland
- ^ McCallum, Andrew (2000). "Efficient clustering of high-dimensional data sets with application to reference matching". Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2000). pp. 169–178.
- ^ Chaudhuri, Surajit (2003). "Robust and Efficient Fuzzy Match for Online Data Cleaning". Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data (SIGMOD 2003). pp. 313–324.
- ^ Baxter, Rohan (2003). "A Comparison of Fast Blocking Methods for Record Linkage". ACM SIGKDD '03 Workshop on Data Cleaning, Record Linkage, and Object Consolidation. pp. 25–27.
- ^ a b Soffer, Aya (2001). "Static Index Pruning for Information Retrieval Systems". Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2001. pp. 43–50.
- ^ Mamoulis, Nikos (2003). "Efficient Processing of Joins on Set-valued Attributes". Proceedings of the 2003 ACM SIGMOD International Conference on Management of Data (SIGMOD 2003). pp. 157–168.
- ^ Zobel, Justin (dec 1998). "Inverted Files Versus Signature Files for Text Indexing". ACM Transactions on Database Systems. 23 (4): 453–490. doi:10.1145/296854.277632.
- ^ Sarawagi, Sunita (2004). "Efficient set joins on similarity predicates". Proceedings of the 2004 ACM SIGMOD International Conference on Management of Data (SIGMOD 2004). pp. 743–754.
- ^ Koller, Daphne (1997). "Hierarchically Classifying Documents Using Very Few Words". Proceedings of the 14th International Conference on Machine Learning (ICML'97). pp. 170–178.
- ^ Yancey, William E. (mar 2002), "BigMatch: A Program for Extracting Probable Matches from a Large File for Record Linkage", Statistical Research Report Series RRC2002/01, U.S. Bureau of the Census, Washington, D.C.
- ^ Winkler, William E. (2006), "Overview of record linkage and current research directions", Statistical Research Report Series RRC2006/02, U.S. Bureau of the Census, Washington, D.C.
- ^ Koudas, Nick (jun 2006). "Special Issue on Data Quality". IEEE Data Engineering Bulletin.
- ^ Winkler, William E. (1999), "The State of Record Linkage and Current Research Problems", Statistical Research Report Series RR99/04, U.S. Bureau of the Census, Washington, D.C.