Author_Institution :
Dept. of Comput. Sci. & Eng., Ohio State Univ., Columbus, OH, USA
Abstract :
Increasingly, many data sources appear as online databases hidden behind query forms, together forming the deep Web. The popularity of this medium for data dissemination is creating new problems in data integration. In particular, to integrate data from multiple deep Web data sources, one needs to obtain the metadata for each source. Obtaining this metadata, particularly the output schema, can be very challenging because, for a given input query, many deep Web data sources return only a subset of the output schema attributes, i.e., the ones that have a non-NULL value for the corresponding input. In this paper, we propose two approaches, a sampling model approach and a mixture model approach, to efficiently obtain an approximately complete set of output schema attributes from a deep Web data source. Our experiments show that while each approach has limitations, a hybrid strategy combining the two achieves high recall with good precision for most data sources.
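The core difficulty the abstract describes, that each query reveals only the non-NULL attributes for its particular input, suggests accumulating the output schema over a sample of queries. The following is a minimal illustrative sketch of that idea, not the paper's algorithm; the `probe` function and the toy data source are hypothetical stand-ins for querying a real deep Web form.

```python
def probe(source, query):
    """Hypothetical stand-in for submitting a query to a deep Web source.

    Returns only the attribute names that came back with non-NULL values
    for this particular input, mirroring the behavior described above.
    """
    return source[query]

def sample_output_schema(source, queries):
    """Union the attributes observed across a sample of input queries,
    approximating the complete output schema."""
    observed = set()
    for q in queries:
        observed |= set(probe(source, q))
    return observed

# Toy example: each query exposes a different subset of the schema.
toy_source = {
    "q1": ["title", "author"],
    "q2": ["title", "year"],
    "q3": ["title", "publisher"],
}
schema = sample_output_schema(toy_source, ["q1", "q2", "q3"])
```

No single query here returns all four attributes, but their union recovers the full schema; the paper's contribution lies in doing this efficiently, i.e., with few queries, which this naive sketch does not address.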
Keywords :
Internet; meta data; data dissemination; data integration; online databases; output metadata extraction; sampling model; scientific deep Web data sources; Computer science; Data engineering; Data mining; Databases; Documentation; HTML; Humans; Sampling methods; USA Councils; Web pages; deep web; schema extraction;