  • DocumentCode
    993487
  • Title
    Improving data quality: eliminating dupes & I-D-ing those spurious links
  • Author
    Lee, Mong Li; Hsu, Wynne
  • Author_Institution
    Sch. of Comput., Nat. Univ. of Singapore, Singapore
  • Volume
    24
  • Issue
    2
  • fYear
    2005
  • Firstpage
    35
  • Lastpage
    38
  • Abstract
    Dirty data arise from abbreviations, data entry mistakes, duplicate records, missing fields, and so forth. The problem is aggravated when multiple data sources must be integrated. Data cleaning refers to the processes employed to detect and remove errors and inconsistencies from data. Given the "garbage in, garbage out" principle, clean data is crucial for database integration, data warehousing, and data mining, so data cleaning is a necessary step before knowledge discovery. We review a knowledge-based framework that supports the definition of duplicate identification rules, and we describe a context-based approach to identifying spurious links in the data.
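    To make the notion of a duplicate identification rule concrete, the following is a minimal sketch, not the framework reviewed in the article: a hypothetical rule that expands common abbreviations and flags two records as duplicates when their normalized name and affiliation fields are nearly identical. The field names, abbreviation table, and similarity threshold are all assumed for illustration.

    # Minimal sketch of a rule-based duplicate check (illustrative assumption,
    # not the framework described in the article). A rule fires when the
    # normalized key fields of two records are nearly identical.
    from difflib import SequenceMatcher

    # Hypothetical abbreviation table used during normalization.
    ABBREVIATIONS = {
        "sch.": "school", "comput.": "computing",
        "nat.": "national", "univ.": "university",
    }

    def normalize(value: str) -> str:
        """Lower-case a field, drop commas, and expand known abbreviations."""
        tokens = value.lower().replace(",", " ").split()
        return " ".join(ABBREVIATIONS.get(t, t) for t in tokens)

    def is_duplicate(rec_a: dict, rec_b: dict, threshold: float = 0.9) -> bool:
        """Duplicate rule: both name and affiliation must match closely
        after normalization (the threshold is an assumed parameter)."""
        for field in ("name", "affiliation"):
            a, b = normalize(rec_a[field]), normalize(rec_b[field])
            if SequenceMatcher(None, a, b).ratio() < threshold:
                return False
        return True

    # Abbreviation differences alone should not hide a duplicate record.
    r1 = {"name": "Lee, Mong Li",
          "affiliation": "Sch. of Comput., Nat. Univ. of Singapore"}
    r2 = {"name": "Lee, Mong Li",
          "affiliation": "School of Computing, National University of Singapore"}
    print(is_duplicate(r1, r2))  # True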
  • Keywords
    data integrity; data mining; database management systems; knowledge based systems; abbreviations; data cleaning; data entry mistakes; data quality improvement; dirty data; duplicate records; garbage collection; knowledge discovery; knowledge-based system; spurious link identification; Association rules; Cleaning; Communication networks; Computer networks; Data mining; Database systems; Humans; Intelligent systems; Performance analysis; Sorting
  • fLanguage
    English
  • Journal_Title
    Potentials, IEEE
  • Publisher
    IEEE
  • ISSN
    0278-6648
  • Type
    jour
  • DOI
    10.1109/MP.2005.1462465
  • Filename
    1462465