HuggingFace Bert Sentiment Analysis

I am getting the following error when I run classifier(encoded):

AssertionError: text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples).

My text is of type str, so I am not sure what I am doing wrong. Any help is greatly appreciated.

import torch
from transformers import AutoTokenizer, BertTokenizer, BertModel, BertForMaskedLM, AutoModelForSequenceClassification, pipeline

# OPTIONAL: if you want to have more information on what's happening under the hood, activate the logger as follows
import logging
logging.basicConfig(level=logging.INFO)

# Load the pre-trained model tokenizer (vocabulary).
# Used cased instead of uncased to account for all-caps words like "BAD".
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')


# alternative? what is the difference between these two tokenizers? 
#tokenizer = AutoTokenizer.from_pretrained("textattack/bert-base-uncased-SST-2")

model = AutoModelForSequenceClassification.from_pretrained("textattack/bert-base-uncased-SST-2")


# feed the model and the tokenizer into the pipeline
classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)


#---------------sample raw input passage--------

text = "Who was Jim Henson ? Jim Henson was a puppeteer. He is simply awful."
# tokenized_text = tokenizer.tokenize(text)

#----------Tokenization and Padding---------
# Encode the sentence: tokenize, add special tokens, and pad.
encoded = tokenizer.encode_plus(
    text=text,                   # the sentence to be encoded
    add_special_tokens=True,     # add [CLS] and [SEP]
    max_length=64,               # maximum length of a sentence (TODO: find the longest passage length)
    pad_to_max_length=True,      # add [PAD]s (deprecated; newer transformers versions use padding='max_length')
    return_attention_mask=True,  # generate the attention mask
    truncation=True,             # explicitly truncate examples to max_length
    return_tensors='pt',         # return PyTorch tensors
)

#-------------------------------------------
# view the IDs
for key, value in encoded.items():
    print(f"{key}: {value.numpy().tolist()}")
    
#-------------------------------------------


classifier(encoded)




Answers (1)


The encoding step is already included in the pipeline. Instead of

classifier(encoded)

do

classifier(text)
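
For reference, here is a minimal end-to-end sketch of the corrected flow. Loading the tokenizer from the same checkpoint as the model is my own choice here (so the vocabulary and casing match the uncased model); the exact label names and scores printed depend on the checkpoint's config:

from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

model_name = "textattack/bert-base-uncased-SST-2"

# Load the model together with its matching tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

classifier = pipeline('sentiment-analysis', model=model, tokenizer=tokenizer)

text = "Who was Jim Henson ? Jim Henson was a puppeteer. He is simply awful."

# Pass the raw string: the pipeline tokenizes, pads, and truncates internally,
# which is why pre-encoding with encode_plus() is unnecessary.
result = classifier(text)
print(result)  # e.g. [{'label': 'LABEL_0', 'score': ...}] -- label names come from the model config

This also sidesteps the mismatch in the question's code, where a cased tokenizer ('bert-base-cased') is paired with an uncased model.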
— PascalIv, 25.01.2021