Source code: beagle

Class List

Here are the classes, structs, unions and interfaces with brief descriptions:
Beagle::Util::Mozilla::AccountClass representing a Mozilla account
Lucene::Net::Search::BooleanQuery::TooManyClausesThrown when an attempt is made to add more than GetMaxClauseCount() clauses. This typically happens if a PrefixQuery, FuzzyQuery, WildcardQuery, or RangeQuery is expanded to many terms during search
Lucene::Net::Search::BooleanScorer2An alternative to BooleanScorer. Uses ConjunctionScorer, DisjunctionScorer, ReqOptScorer and ReqExclScorer. Implements skipTo(), and has no limitation on the number of added scorers
Lucene::Net::Search::BooleanScorer2::SingleMatchScorerCount a scorer as a single match
Lucene::Net::Search::BooleanScorer::BucketTableA simple hash table of document scores within a range
Lucene::Net::Store::BufferedIndexOutputBase implementation class for buffered IndexOutput
Lucene::Net::Search::CachingWrapperFilterWraps another filter's result and caches it. The caching behavior is like QueryFilter. The purpose is to allow filters to simply filter, and then wrap with this class to add caching, keeping the two concerns decoupled yet composable
Lucene::Net::Analysis::Standard::CharStreamThis interface describes a character stream that maintains line and column number positions of the characters. It also has the capability to backup the stream to some extent. An implementation of this interface is used in the TokenManager implementation generated by JavaCCParser
Lucene::Net::QueryParsers::CharStreamThis interface describes a character stream that maintains line and column number positions of the characters. It also has the capability to backup the stream to some extent. An implementation of this interface is used in the TokenManager implementation generated by JavaCCParser
Lucene::Net::Analysis::CharTokenizerAn abstract base class for simple, character-oriented tokenizers
Lucene::Net::Index::CompoundFileReaderClass for accessing a compound stream. This class implements a directory, but is limited to only read operations. Directory methods that would normally modify data throw an exception
Lucene::Net::Index::CompoundFileReader::CSIndexInputImplementation of an IndexInput that reads from a portion of the compound file. The visibility is left as "package" *only* because this helps with testing since JUnit test cases in a different class can then access package fields of this class
Lucene::Net::Index::CompoundFileWriterCombines multiple files into a single compound file. The file format:
Lucene::Net::Search::ConjunctionScorerScorer for conjunctions, sets of queries, all of which are required
Lucene::Net::Search::ConstantScoreQueryA query that wraps a filter and simply returns a constant score equal to the query boost for every document in the filter
Lucene::Net::Search::ConstantScoreRangeQueryA range query that returns a constant score equal to its boost for all documents in the range
HtmlAgilityPack::Crc32A utility class to compute CRC32
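The Crc32 entry above names a standard algorithm. As an illustration only (not HtmlAgilityPack's actual API), the same CRC-32 polynomial is available in the JDK via `java.util.zip.CRC32`; `Crc32Sketch` and `checksum` below are hypothetical names:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Hypothetical sketch: computing a CRC-32 checksum with the JDK's
// java.util.zip.CRC32 (the standard IEEE reflected polynomial).
public class Crc32Sketch {
    public static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);   // feed the whole buffer
        return crc.getValue();              // 32-bit result as a long
    }
}
```

The standard check vector "123456789" yields 0xCBF43926, which is a quick way to confirm an implementation uses the usual polynomial.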
Lucene::Net::Search::DateFilterA Filter that restricts search results to a range of time
Lucene::Net::Documents::DateToolsProvides support for converting dates to strings and vice-versa. The strings are structured so that lexicographic sorting orders them by date, which makes them suitable for use as field values and search terms
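The key idea in the DateTools entry is that a fixed-width, most-significant-field-first string makes lexicographic order agree with chronological order. A minimal sketch of that idea (not DateTools' actual API; the class and method names below are invented):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

// Sketch: year first, then month, day, hour, minute, second, all
// zero-padded, so comparing the strings compares the instants.
public class DateStringSketch {
    private static final SimpleDateFormat FMT =
        new SimpleDateFormat("yyyyMMddHHmmss", Locale.ROOT);
    static {
        FMT.setTimeZone(TimeZone.getTimeZone("UTC")); // fixed zone keeps order stable
    }

    public static String toSortableString(Date d) {
        return FMT.format(d);
    }
}
```

For example, the epoch (1970-01-01 00:00:00 UTC) formats as "19700101000000", and any later instant formats to a lexicographically larger string.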
Lucene::Net::Search::DisjunctionMaxQueryA query that generates the union of the documents produced by its subqueries, and that scores each document as the maximum score for that document produced by any subquery plus a tie breaking increment for any additional matching subqueries. This is useful to search for a word in multiple fields with different boost factors (so that the fields cannot be combined equivalently into a single search field). We want the primary score to be the one associated with the highest boost, not the sum of the field scores (as BooleanQuery would give). If the query is "albino elephant" this ensures that "albino" matching one field and "elephant" matching another gets a higher score than "albino" matching both fields. To get this result, use both BooleanQuery and DisjunctionMaxQuery: for each term a DisjunctionMaxQuery searches for it in each field, while the set of these DisjunctionMaxQuery's is combined into a BooleanQuery. The tie breaker capability allows results that include the same term in multiple fields to be judged better than results that include this term in only the best of those multiple fields, without confusing this with the better case of two different terms in the multiple fields
Lucene::Net::Search::DisjunctionMaxScorerThe Scorer for DisjunctionMaxQuery's. The union of all documents generated by the subquery scorers is generated in document number order. The score for each document is the maximum of the scores computed by the subquery scorers that generate that document, plus tieBreakerMultiplier times the sum of the scores for the other subqueries that generate the document
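The scoring rule the two entries above describe can be stated in a few lines. This is not Lucene's implementation, only the arithmetic: the best subquery score, plus the tie-breaker multiplier times the remaining subquery scores.

```java
// Sketch of the DisjunctionMax scoring rule (names are invented):
// score = max(subScores) + tieBreakerMultiplier * (sum(subScores) - max)
public class DisjunctionMaxSketch {
    public static float score(float[] subScores, float tieBreakerMultiplier) {
        float max = 0f, sum = 0f;
        for (float s : subScores) {
            sum += s;
            if (s > max) max = s;
        }
        return max + tieBreakerMultiplier * (sum - max);
    }
}
```

With tieBreakerMultiplier = 0 only the best field counts; a small positive value lets additional matching fields nudge the score upward without ever dominating the maximum.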
Lucene::Net::Search::DisjunctionSumScorerA Scorer for OR like queries, counterpart of Lucene's ConjunctionScorer. This Scorer implements Scorer#SkipTo(int) and uses skipTo() on the given Scorers, which are kept in a queue that orders by Scorer#Doc()
Lucene::Net::Analysis::Standard::FastCharStreamAn efficient implementation of JavaCC's CharStream interface
Lucene::Net::QueryParsers::FastCharStreamAn efficient implementation of JavaCC's CharStream interface
Lucene::Net::Search::FieldCacheImplExpert: The default cache implementation, storing all values in memory. A WeakHashMap is used for storage
Lucene::Net::Search::FieldCacheImpl::EntryExpert: Every key in the internal cache is of this type
Lucene::Net::Search::FieldDocExpert: A ScoreDoc which also contains information about how to sort the referenced document. In addition to the document number and score, this object contains an array of values for the document from the field(s) used to sort. For example, if the sort criteria was to sort by fields "a", "b" then "c", the object array will have three elements, corresponding respectively to the term values for the document in fields "a", "b" and "c". The class of each element in the array will be either Integer, Float or String depending on the type of values in the terms of each field
Lucene::Net::Search::FieldDocSortedHitQueueExpert: Collects sorted results from Searchable's and collates them. The elements put into this queue must be of type FieldDoc
Lucene::Net::Index::FieldInfosAccess to the Field Info file that describes document fields and whether or not they are indexed. Each segment has a separate Field Info file. Objects of this class are thread-safe for multiple readers, but only one thread can be adding documents at a time, with no other reader or writer threads accessing this object
Lucene::Net::Search::FieldSortedHitQueueExpert: A hit queue for sorting hits by terms in more than one field. Uses FieldCache.DEFAULT for maintaining internal term lookup tables
Lucene::Net::Index::FieldsReaderClass responsible for access to stored document fields
Lucene::Net::Search::FilterAbstract base class providing a mechanism to restrict searches to a subset of an index
Lucene::Net::Search::FilteredQueryA query that applies a filter to the results of another query
Lucene::Net::Index::FilterIndexReaderA FilterIndexReader contains another IndexReader, which it uses as its basic source of data, possibly transforming the data along the way or providing additional functionality. The class itself simply implements all abstract methods of IndexReader with versions that pass all requests to the contained index reader. Subclasses of FilterIndexReader may further override some of these methods and may also provide additional methods and fields
Lucene::Net::Index::FilterIndexReader::FilterTermDocsBase class for filtering TermDocs implementations
Lucene::Net::Index::FilterIndexReader::FilterTermEnumBase class for filtering TermEnum implementations
Lucene::Net::Index::FilterIndexReader::FilterTermPositionsBase class for filtering TermPositions implementations
Lucene::Net::Search::FloatParserInterface to parse floats from document fields
Lucene::Net::Search::FuzzyQueryImplements the fuzzy search query. The similarity measurement is based on the Levenshtein (edit distance) algorithm
Lucene::Net::Search::FuzzyTermEnumSubclass of FilteredTermEnum for enumerating all terms that are similar to the specified filter term
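The FuzzyQuery entries above rest on Levenshtein edit distance: the minimum number of single-character insertions, deletions, and substitutions that turn one string into another. A plain textbook dynamic-programming version (not Lucene's optimized one):

```java
// Sketch: two-row DP table for Levenshtein distance, O(|a|*|b|) time,
// O(|b|) space. Class and method names are invented for illustration.
public class LevenshteinSketch {
    public static int distance(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j; // empty prefix of a
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;                                   // empty prefix of b
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1,   // insertion
                                            prev[j] + 1),      // deletion
                                   prev[j - 1] + cost);        // substitution
            }
            int[] tmp = prev; prev = curr; curr = tmp;     // roll the rows
        }
        return prev[b.length()];
    }
}
```

The classic example: "kitten" to "sitting" takes 3 edits (k→s, e→i, insert g).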
Lucene::Net::Search::HitWrapper used by HitIterator to provide a lazily loaded hit from Hits
Lucene::Net::Search::HitCollectorLower-level search API. HitCollectors are primarily meant to be used to implement queries, sorting and filtering
Lucene::Net::Search::HitIteratorAn iterator over Hits that provides lazy fetching of each document. Hits#Iterator() returns an instance of this class. Calls to next() return a Hit instance
HtmlAgilityPack::HtmlAttributeRepresents an HTML attribute
HtmlAgilityPack::HtmlAttributeCollectionRepresents a combined list and collection of HTML nodes
HtmlAgilityPack::HtmlAttributeCollection::HtmlAttributeEnumeratorRepresents an enumerator that can iterate through the list
HtmlAgilityPack::HtmlCommentNodeRepresents an HTML comment
HtmlAgilityPack::HtmlDocumentRepresents a complete HTML document
HtmlAgilityPack::HtmlEntityA utility class to replace special characters by entities and vice-versa. Follows HTML 4.0 specification found at http://www.w3.org/TR/html4/sgml/entities.html
HtmlAgilityPack::HtmlNodeRepresents an HTML node
HtmlAgilityPack::HtmlNodeCollectionRepresents a combined list and collection of HTML nodes
HtmlAgilityPack::HtmlNodeCollection::HtmlNodeEnumeratorRepresents an enumerator that can iterate through the list
HtmlAgilityPack::HtmlNodeNavigatorRepresents an HTML navigator on an HTML document seen as a data store
HtmlAgilityPack::HtmlParseErrorRepresents a parsing error found during document parsing
HtmlAgilityPack::HtmlTextNodeRepresents an HTML text node
Lucene::Net::Index::IndexFileNamesUseful constants representing filenames and extensions used by lucene
Lucene::Net::Index::IndexModifierA class to modify an index, i.e. to delete and add documents. This class hides IndexReader and IndexWriter so that you do not need to care about implementation details such as that adding documents is done via IndexWriter and deletion is done via IndexReader
Lucene::Net::Search::IndexSearcherImplements search over a single IndexReader
Lucene::Net::Search::IntParserInterface to parse ints from document fields
Lucene::Net::Analysis::ISOLatin1AccentFilterA filter that replaces accented characters in the ISO Latin 1 character set (ISO-8859-1) by their unaccented equivalent. The case will not be altered
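One common way to get the effect the ISOLatin1AccentFilter entry describes is to NFD-decompose the text and drop the combining marks. This is an assumed equivalent, not the filter's actual algorithm (and unlike the real filter it does not handle ligatures such as œ); the names below are invented:

```java
import java.text.Normalizer;

// Sketch: decompose accented characters into base letter + combining
// mark (NFD), then strip the combining diacritical marks.
public class AccentStripSketch {
    public static String strip(String s) {
        String decomposed = Normalizer.normalize(s, Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{InCombiningDiacriticalMarks}+", "");
    }
}
```

As the entry notes, case is not altered: "Élève" becomes "Eleve", not "eleve".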
IThreadRunnableThis interface should be implemented by any class whose instances are intended to be executed by a thread
Lucene::Net::Analysis::KeywordAnalyzer"Tokenizes" the entire stream as a single token. This is useful for data like zip codes, ids, and some product names
Lucene::Net::Analysis::KeywordTokenizerEmits the entire input as a single token
Lucene::Net::Analysis::LengthFilterRemoves words that are too long or too short from the stream
Lucene::Net::Analysis::LetterTokenizerA LetterTokenizer is a tokenizer that divides text at non-letters. That's to say, it defines tokens as maximal strings of adjacent letters, as defined by java.lang.Character.isLetter() predicate. Note: this does a decent job for most European languages, but does a terrible job for some Asian languages, where words are not separated by spaces
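The rule stated in the LetterTokenizer entry, tokens as maximal runs of characters for which Character.isLetter is true, fits in a few lines. A standalone sketch (not the Tokenizer API; names are invented):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split text at non-letters; each maximal run of letters
// (per Character.isLetter) becomes one token.
public class LetterTokenizeSketch {
    public static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : text.toCharArray()) {
            if (Character.isLetter(c)) {
                current.append(c);              // extend the current run
            } else if (current.length() > 0) {
                tokens.add(current.toString()); // close the run at a non-letter
                current.setLength(0);
            }
        }
        if (current.length() > 0) tokens.add(current.toString());
        return tokens;
    }
}
```

Note how the rule splits on apostrophes too: "don't stop" tokenizes to [don, t, stop], which is the "decent for European languages" behavior the entry hedges about.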
Lucene::Net::Store::Lock::WithUtility class for executing code with exclusive access
Lucene::Net::Analysis::LowerCaseFilterNormalizes token text to lower case
Lucene::Net::Analysis::LowerCaseTokenizerLowerCaseTokenizer performs the function of LetterTokenizer and LowerCaseFilter together. It divides text at non-letters and converts them to lower case. While it is functionally equivalent to the combination of LetterTokenizer and LowerCaseFilter, there is a performance advantage to doing the two tasks at once, hence this (redundant) implementation
Lucene::Net::Analysis::AnalyzerAn Analyzer builds TokenStreams, which analyze text. It thus represents a policy for extracting index terms from text
Lucene::Net::Analysis::TokenA Token is an occurrence of a term from the text of a field. It consists of a term's text, the start and end offset of the term in the text of the field, and a type string. The start and end offsets permit applications to re-associate a token with its source text, e.g., to display highlighted query terms in a document browser, or to show matching text fragments in a KWIC (KeyWord In Context) display, etc. The type is an interned string, assigned by a lexical analyzer (a.k.a. tokenizer), naming the lexical or syntactic class that the token belongs to. For example an end of sentence marker token might be implemented with type "eos". The default token type is "word"
Lucene::Net::Analysis::TokenStreamA TokenStream enumerates the sequence of tokens, either from fields of a document or from query text
Lucene::Net::Documents::DateFieldProvides support for converting dates to strings and vice-versa. The strings are structured so that lexicographic sorting orders them by date, which makes them suitable for use as field values and search terms
Lucene::Net::Documents::DocumentDocuments are the unit of indexing and search
Lucene::Net::Documents::FieldA field is a section of a Document. Each field has two parts, a name and a value. Values may be free text, provided as a String or as a Reader, or they may be atomic keywords, which are not further processed. Such keywords may be used to represent dates, urls, etc. Fields are optionally stored in the index, so that they may be returned with hits on the document
Lucene::Net::Index::IndexFileNameFilterFilename filter that accepts filenames and extensions only created by Lucene
Lucene::Net::Index::IndexReaderIndexReader is an abstract class, providing an interface for accessing an index. Search of an index is done entirely through this abstract interface, so that any subclass which implements it is searchable
Lucene::Net::Index::IndexWriterAn IndexWriter creates and maintains an index. The third argument to the constructor determines whether a new index is created, or whether an existing index is opened for the addition of new documents. In either case, documents are added with the addDocument method. When finished adding documents, close should be called
Lucene::Net::Index::MultipleTermPositionsDescribe class here
Lucene::Net::Index::TermA Term represents a word from text. This is the unit of search. It is composed of two elements: the text of the word, as a string, and the name of the field that the text occurred in, an interned string. Note that terms may represent more than words from text fields; they can also represent things like dates, email addresses, urls, etc
Lucene::Net::Index::TermDocsTermDocs provides an interface for enumerating <document, frequency> pairs for a term
Lucene::Net::Index::TermEnumAbstract class for enumerating terms
Lucene::Net::Index::TermFreqVectorProvides access to stored term vector of a document field
Lucene::Net::Index::TermPositionsTermPositions provides an interface for enumerating the <document, frequency, <position>* > tuples for a term
Lucene::Net::Search::BooleanClauseA clause in a BooleanQuery
Lucene::Net::Search::BooleanQueryA Query that matches documents matching boolean combinations of other queries, e.g. TermQuerys, PhraseQuerys or other BooleanQuerys
Lucene::Net::Search::DefaultSimilarityExpert: Default scoring implementation
Lucene::Net::Search::ExplanationExpert: Describes the score computation for document and query
Lucene::Net::Search::FilteredTermEnumAbstract class for enumerating a subset of all terms
Lucene::Net::Search::HitsA ranked list of documents, used to hold search results
Lucene::Net::Search::MultiPhraseQueryMultiPhraseQuery is a generalized version of PhraseQuery, with an added method Add(Term[]). To use this class, to search for the phrase "Microsoft app*" first use add(Term) on the term "Microsoft", then find all terms that have "app" as prefix using IndexReader.terms(Term), and use MultiPhraseQuery.add(Term[] terms) to add them to the query
Lucene::Net::Search::MultiTermQueryA Query that matches documents containing a subset of terms provided by a FilteredTermEnum enumeration
Lucene::Net::Search::PhraseQueryA Query that matches documents containing a particular sequence of terms. A PhraseQuery is built by QueryParser for input like "new york"
Lucene::Net::Search::PrefixQueryA Query that matches documents containing terms with a specified prefix. A PrefixQuery is built by QueryParser for input like app*
Lucene::Net::Search::QueryThe abstract base class for queries
Lucene::Net::Search::RangeQueryA Query that matches documents within an exclusive range. A RangeQuery is built by QueryParser for input like [010 TO 120]
Lucene::Net::Search::ScorerExpert: Common scoring functionality for different types of queries. A Scorer either iterates over documents matching a query, or provides an explanation of the score for a query for a given document. Document scores are computed using a given Similarity implementation
Lucene::Net::Search::SearchableThe interface for search implementations
Lucene::Net::Search::SearcherAn abstract base class for search implementations. Implements the main search methods
Lucene::Net::Search::SimilarityExpert: Scoring API
Lucene::Net::Search::Spans::SpanOrQueryMatches the union of its clauses
Lucene::Net::Search::Spans::SpanQueryBase class for span-based queries
Lucene::Net::Search::Spans::SpanTermQueryMatches spans containing a term
Lucene::Net::Search::StringIndexExpert: Maintains caches of term values
Lucene::Net::Search::TermQueryA Query that matches documents containing a term. This may be combined with other terms with a BooleanQuery
Lucene::Net::Search::WeightExpert: Calculate query weights and build query scorers
Lucene::Net::Store::BufferedIndexInputBase implementation class for buffered IndexInput
Lucene::Net::Store::DirectoryA Directory is a flat list of files. Files may be written once, when they are created. Once a file is created it may only be opened for read, or deleted. Random access is permitted both when reading and writing
Lucene::Net::Store::FSDirectoryStraightforward implementation of Directory as a directory of files
Lucene::Net::Store::IndexInputAbstract base class for input from a file in a Directory. A random-access input stream. Used for all Lucene index input operations
Lucene::Net::Store::IndexOutputAbstract base class for output to a file in a Directory. A random-access output stream. Used for all Lucene index output operations
Lucene::Net::Store::LockAn interprocess mutex lock
Lucene::Net::Store::RAMDirectoryA memory-resident Directory implementation
Lucene::Net::Store::RAMOutputStreamA memory-resident IndexOutput implementation
Lucene::Net::Util::BitVectorOptimized implementation of a vector of bits. This is more-or-less like java.util.BitSet, but also includes the following:
Lucene::Net::Util::ConstantsSome useful constants
Lucene::Net::Util::ParameterA serializable Enum class
Lucene::Net::Util::PriorityQueueA PriorityQueue maintains a partial ordering of its elements such that the least element can always be found in constant time. Put()'s and pop()'s require log(size) time
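The contract the PriorityQueue entry states, least element readable in constant time, put() and pop() in log(size) time, is exactly what a binary heap provides. A toy sketch of that structure (not Lucene's PriorityQueue, which is abstract and comparator-based; names here are invented):

```java
import java.util.Arrays;

// Sketch: array-backed binary min-heap. heap[0] is always the least
// element; parent of i is (i-1)/2, children are 2i+1 and 2i+2.
public class MinHeapSketch {
    private int[] heap = new int[8];
    private int size = 0;

    public void put(int value) {                         // O(log n)
        if (size == heap.length) heap = Arrays.copyOf(heap, size * 2);
        int i = size++;
        heap[i] = value;
        while (i > 0 && heap[(i - 1) / 2] > heap[i]) {   // sift up
            int p = (i - 1) / 2;
            int t = heap[p]; heap[p] = heap[i]; heap[i] = t;
            i = p;
        }
    }

    public int top() { return heap[0]; }                 // O(1)

    public int pop() {                                   // O(log n)
        int least = heap[0];
        heap[0] = heap[--size];
        int i = 0;                                       // sift down
        while (true) {
            int l = 2 * i + 1, r = l + 1, smallest = i;
            if (l < size && heap[l] < heap[smallest]) smallest = l;
            if (r < size && heap[r] < heap[smallest]) smallest = r;
            if (smallest == i) break;
            int t = heap[i]; heap[i] = heap[smallest]; heap[smallest] = t;
            i = smallest;
        }
        return least;
    }
}
```

The "partial ordering" in the entry is the heap property: each parent is no greater than its children, which is far cheaper to maintain than full sorted order yet still keeps the minimum at the root.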
Lucene::Net::Util::SmallFloatFloating point numbers smaller than 32 bits
Lucene::Net::Util::StringHelperMethods for manipulating strings
Lucene::Net::LucenePackageLucene's package information, including version
Lucene::Net::Search::MatchAllDocsQueryA query that matches all documents
Beagle::Util::Mozilla::MessageMessage (mail, rss whatever) in Mozilla
Beagle::Util::Mozilla::MessageReaderFIXME: This is a hack and does not comply with any RFC, nor does it support attachments, encodings and other fancy features. FIXME: Use a lib like gmime to parse messages; it must be available on Linux, Win32 & MacOSX
HtmlAgilityPack::MixedCodeDocumentRepresents a document with mixed code and text. ASP, ASPX, JSP, are good example of such documents
HtmlAgilityPack::MixedCodeDocumentCodeFragmentRepresents a fragment of code in a mixed code document
HtmlAgilityPack::MixedCodeDocumentFragmentRepresents a base class for fragments in a mixed code document
HtmlAgilityPack::MixedCodeDocumentFragmentListRepresents a list of mixed code fragments
HtmlAgilityPack::MixedCodeDocumentFragmentList::MixedCodeDocumentFragmentEnumeratorRepresents a fragment enumerator
HtmlAgilityPack::MixedCodeDocumentTextFragmentRepresents a fragment of text in a mixed code document
Lucene::Net::Store::MMapDirectoryFile-based Directory implementation that uses mmap for input
Lucene::Net::QueryParsers::MultiFieldQueryParserA QueryParser which constructs queries to search multiple fields
Lucene::Net::Index::MultiReaderAn IndexReader which reads multiple indexes, appending their content
Lucene::Net::Search::MultiSearcherImplements search over a set of Searchables
Lucene::Net::Search::MultiSearcher::CachedDfSourceDocument Frequency cache acting as a Dummy-Searcher. This class is not a full-fledged Searcher; it only supports the methods necessary to initialize Weights
Lucene::Net::Search::MultiSearcherThreadA thread subclass for searching a single searchable
Lucene::Net::Search::Spans::NearSpans::SpansCellWraps a Spans, and can be used to form a linked list
Lucene::Net::Search::NonMatchingScorerA scorer that matches no document at all
Lucene::Net::Documents::NumberToolsProvides support for converting longs to Strings, and back again. The strings are structured so that lexicographic sorting order is preserved
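The property in the NumberTools entry, long-to-string conversion where lexicographic order is preserved, comes from encoding to a fixed width so that string comparison compares digit by digit from the most significant position. A simplified sketch of that idea (not NumberTools' actual encoding, which also handles negative longs; this version assumes non-negative input and invented names):

```java
// Sketch: zero-padded base-36 encoding of a non-negative long. Equal-width
// strings in base 36 ('0'-'9' then 'a'-'z') compare in numeric order.
public class LongStringSketch {
    private static final int WIDTH = Long.toString(Long.MAX_VALUE, 36).length();

    public static String toSortableString(long n) {
        if (n < 0) throw new IllegalArgumentException("sketch handles n >= 0 only");
        String raw = Long.toString(n, 36);
        StringBuilder sb = new StringBuilder(WIDTH);
        for (int i = raw.length(); i < WIDTH; i++) sb.append('0'); // pad left
        return sb.append(raw).toString();
    }

    public static long fromSortableString(String s) {
        return Long.parseLong(s, 36);
    }
}
```

The padding is the whole trick: without it, "9" sorts after "10" as strings even though 9 < 10 as numbers.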
Lucene::Net::Search::ParallelMultiSearcherImplements parallel search over a set of Searchables
Lucene::Net::Index::ParallelReaderAn IndexReader which reads multiple, parallel indexes. Each index added must have the same number of documents, but typically each contains different fields. Each document contains the union of the fields of all documents with the same document number. When searching, matches for a query term are from the first index added that has the field
Lucene::Net::Analysis::Standard::ParseExceptionThis exception is thrown when parse errors are encountered. You can explicitly create objects of this exception type by calling the method generateParseException in the generated parser
Lucene::Net::QueryParsers::ParseExceptionThis exception is thrown when parse errors are encountered. You can explicitly create objects of this exception type by calling the method generateParseException in the generated parser
Lucene::Net::Analysis::PerFieldAnalyzerWrapperThis analyzer is used to facilitate scenarios where different fields require different analysis techniques. Use addAnalyzer to add a non-default analyzer on a field name basis
Lucene::Net::Search::PhrasePrefixQueryPhrasePrefixQuery is a generalized version of PhraseQuery, with an added method Add(Term[]). To use this class, to search for the phrase "Microsoft app*" first use add(Term) on the term "Microsoft", then find all terms that have "app" as prefix using IndexReader.terms(Term), and use PhrasePrefixQuery.add(Term[] terms) to add them to the query
Lucene::Net::Analysis::PorterStemFilterTransforms the token stream as per the Porter stemming algorithm. Note: the input to the stemming filter must already be in lower case, so you will need to use LowerCaseFilter or LowerCaseTokenizer farther down the Tokenizer chain in order for this to work properly!
Lucene::Net::Analysis::PorterStemmerStemmer, implementing the Porter Stemming Algorithm
Beagle::Util::Mozilla::PreferencesClass for parsing Mozilla preferences files - prefs.js
Beagle::Util::Mozilla::ProfileClass representing a Mozilla profile, used to get a user's profiles and accounts
Lucene::Net::Search::QueryFilterConstrains search results to only match those which also match a provided query. Results are cached, so that searches after the first on the same index using this filter are much faster
Lucene::Net::QueryParsers::QueryParserThis class is generated by JavaCC. The most important method is Parse(String)
Lucene::Net::QueryParsers::QueryParser::OperatorThe default operator for parsing queries. Use QueryParser#setDefaultOperator to change it
Lucene::Net::Store::RAMInputStreamA memory-resident IndexInput implementation
Lucene::Net::Search::RangeFilterA Filter that restricts search results to a range of values in a given field
Lucene::Net::Search::RemoteSearchableA remote searchable implementation
Lucene::Net::Search::ReqExclScorerA Scorer for queries with a required subscorer and an excluding (prohibited) subscorer. This implements Scorer#SkipTo(int), and it uses skipTo() on the given scorers
Lucene::Net::Search::ReqOptSumScorerA Scorer for queries with a required part and an optional part. Delays skipTo() on the optional part until a score() is needed. This implements Scorer#SkipTo(int)
Lucene::Net::Search::ScoreDocExpert: Returned by low-level search implementations
Lucene::Net::Search::ScoreDocComparator_FieldsExpert: Compares two ScoreDoc objects for sorting
Lucene::Net::Index::SegmentMergerCombines two or more Segments, represented by an IndexReader (add), into a single Segment. After adding the appropriate readers, call the merge method to combine the segments
Lucene::Net::Search::SimilarityDelegatorExpert: Delegating scoring implementation. Useful in Query#GetSimilarity(Searcher) implementations, to override only certain methods of a Searcher's Similarity implementation
Lucene::Net::Analysis::SimpleAnalyzerAn Analyzer that filters LetterTokenizer with LowerCaseFilter
Lucene::Net::Search::SortEncapsulates sort criteria for returned hits
Lucene::Net::Search::SortComparatorAbstract base class for sorting hits returned by a Query
Lucene::Net::Search::SortComparatorSourceExpert: Returns a comparator for sorting ScoreDocs
Lucene::Net::Search::SortFieldStores information about how to sort documents by terms in an individual field. Fields must be indexed in order to sort by them
Lucene::Net::Search::Spans::SpanFirstQueryMatches spans near the beginning of a field
Lucene::Net::Search::Spans::SpanNearQueryMatches spans which are near one another. One can specify slop, the maximum number of intervening unmatched positions, as well as whether matches are required to be in-order
Lucene::Net::Search::Spans::SpanNotQueryRemoves matches which overlap with another SpanQuery
Lucene::Net::Search::Spans::SpansExpert: an enumeration of span matches. Used to implement span searching. Each span represents a range of term positions within a document. Matches are enumerated in order, by increasing document number, within that by increasing start position and finally by increasing end position
Mono::Data::SqliteClient::SqliteProvides the core of C# bindings to the library sqlite.dll
Mono::Data::SqliteClient::SqliteDataAdapterRepresents a set of data commands and a database connection that are used to fill the DataSet and update the data source
Mono::Data::SqliteClient::SqliteRowUpdatedEventArgsProvides data for the SqliteDataAdapter.RowUpdated event
Mono::Data::SqliteClient::SqliteRowUpdatingEventArgsProvides data for the SqliteDataAdapter.RowUpdating event
Lucene::Net::Analysis::Standard::StandardAnalyzerFilters StandardTokenizer with StandardFilter, LowerCaseFilter and StopFilter, using a list of English stop words
Lucene::Net::Analysis::Standard::StandardFilterNormalizes tokens extracted with StandardTokenizer
Lucene::Net::Analysis::Standard::StandardTokenizerA grammar-based tokenizer constructed with JavaCC
Lucene::Net::Analysis::StopAnalyzerFilters LetterTokenizer with LowerCaseFilter and StopFilter
Lucene::Net::Analysis::StopFilterRemoves stop words from a token stream
SupportClassContains conversion support elements such as classes, interfaces and static methods
SupportClass::CharacterMimics Java's Character class
SupportClass::CompressionSupportUse for .NET 1.1 Framework only
SupportClass::FileSupportRepresents the methods to support some operations over files
SupportClass::NumberA simple class for number conversions
SupportClass::ThreadClassSupport class used to handle threads
Lucene::Net::Index::TermInfoA TermInfo is the record of information stored for a term
Lucene::Net::Index::TermInfosReaderThis stores a monotonically increasing set of <Term, TermInfo> pairs in a Directory. Pairs are accessed either by Term or by ordinal position in the set
Lucene::Net::Index::TermInfosWriterThis stores a monotonically increasing set of <Term, TermInfo> pairs in a Directory. A TermInfos can be written once, in order
Lucene::Net::Index::TermPositionVectorExtends TermFreqVector to provide additional information about positions in which each of the terms is found. A TermPositionVector does not necessarily contain both positions and offsets, but at least one of these arrays exists
Lucene::Net::Search::TermScorerExpert: A Scorer for documents matching a Term
Lucene::Net::Index::TermVectorsWriterWriter works by opening a document and then opening the fields within the document and then writing out the vectors for each field
Lucene::Net::Analysis::Standard::TokenDescribes the input token stream
Lucene::Net::QueryParsers::TokenDescribes the input token stream
Lucene::Net::Analysis::TokenFilterA TokenFilter is a TokenStream whose input is another token stream
Lucene::Net::Analysis::TokenizerA Tokenizer is a TokenStream whose input is a Reader
Lucene::Net::Search::TopDocsExpert: Returned by low-level search implementations
Lucene::Net::Search::TopFieldDocsExpert: Returned by low-level sorted search implementations
Lucene::Net::Analysis::WhitespaceAnalyzerAn Analyzer that uses WhitespaceTokenizer
Lucene::Net::Analysis::WhitespaceTokenizerA WhitespaceTokenizer is a tokenizer that divides text at whitespace. Adjacent sequences of non-Whitespace characters form tokens
Lucene::Net::Search::WildcardQueryImplements the wildcard search query. Supported wildcards are *, which matches any character sequence (including the empty one), and ?, which matches any single character. Note this query can be slow, as it needs to iterate over many terms. In order to prevent extremely slow WildcardQueries, a Wildcard term should not start with one of the wildcards * or ?
Lucene::Net::Search::WildcardTermEnumSubclass of FilteredTermEnum for enumerating all terms that match the specified wildcard filter term
Lucene::Net::Analysis::WordlistLoaderLoader for text files that represent a list of stopwords

Generated by Doxygen 1.6.0