nltk.tokenize.WhitespaceTokenizer

- class nltk.tokenize.WhitespaceTokenizer[source]

Bases: RegexpTokenizer

Tokenize a string on whitespace (space, tab, newline). In general, users should use the string split() method instead.

>>> from nltk.tokenize import WhitespaceTokenizer
>>> s = "Good muffins cost $3.88\nin New York. Please buy me\ntwo of them.\n\nThanks."
>>> WhitespaceTokenizer().tokenize(s)
['Good', 'muffins', 'cost', '$3.88', 'in', 'New', 'York.', 'Please', 'buy', 'me', 'two', 'of', 'them.', 'Thanks.']
- span_tokenize(text)

Identify the tokens using integer offsets (start_i, end_i), where s[start_i:end_i] is the corresponding token.

- Return type
  Iterator[Tuple[int, int]]
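Since WhitespaceTokenizer is a gap-based RegexpTokenizer, the spans it yields are the stretches of text between whitespace matches. A minimal sketch of that computation, using only the re module (whitespace_spans is a hypothetical helper, not part of NLTK's API):

```python
import re

def whitespace_spans(text):
    # Sketch of what span_tokenize computes for a gap-based tokenizer:
    # yield (start_i, end_i) for each run of text between \s+ matches.
    pos = 0
    for m in re.finditer(r"\s+", text):
        if m.start() > pos:          # skip leading whitespace
            yield (pos, m.start())
        pos = m.end()
    if pos < len(text):              # trailing token, if any
        yield (pos, len(text))

s = "Good muffins cost $3.88"
spans = list(whitespace_spans(s))
# Each (start_i, end_i) recovers its token via slicing:
tokens = [s[a:b] for a, b in spans]
```

Unlike tokenize(), the spans preserve each token's position in the original string, which is useful for highlighting or aligning annotations back onto the source text.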
- span_tokenize_sents(strings: List[str]) → Iterator[List[Tuple[int, int]]]

Apply self.span_tokenize() to each element of strings. I.e.:

return [self.span_tokenize(s) for s in strings]

- Yields
  List[Tuple[int, int]]

- Parameters
  strings (List[str]) –

- Return type
  Iterator[List[Tuple[int, int]]]
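The semantics above can be sketched without NLTK: apply the per-string span computation to each input and yield one list of spans per string. Both helpers below (whitespace_spans, span_tokenize_sents) are illustrative stand-ins, not NLTK's implementation:

```python
import re

def whitespace_spans(text):
    # Offsets of non-whitespace runs, equivalent to splitting on whitespace.
    return [m.span() for m in re.finditer(r"\S+", text)]

def span_tokenize_sents(strings):
    # One list of (start, end) spans per input string, yielded lazily.
    for s in strings:
        yield whitespace_spans(s)

sents = ["Good muffins", "Thanks."]
result = list(span_tokenize_sents(sents))
```

Note the offsets in each inner list are relative to that string alone, so result[1] starts again at 0 for "Thanks.".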