BERT encoding is very slow - Help
I've been following this GitHub repository: https://github.com/matlab-deep-learning/transformer-models, which is the MATLAB implementation of BERT.
While trying to encode my text with the tokenizer, following the example script, I found that BERT encoding takes a very long time on my dataset.
My dataset contains over 1000 text entries, each around 1000 characters long. I noticed that the example CSV used in the repository contains only very short description texts. My questions are: how can we perform text preprocessing using BERT encoding, and how can we speed up the encoding process?
Thanks!
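Two things usually help here. First, BERT-Base only accepts 512 tokens per input, so truncating long texts before tokenization avoids wasted work. Second, each document can be encoded independently, so the loop parallelizes trivially. Below is a minimal sketch, assuming the `bert()` and `encode()` interfaces from the matlab-deep-learning/transformer-models repository (the `documents` variable and the 2000-character cap are placeholders for your own data and limits):

```matlab
% Hedged sketch - assumes the bert()/encode() API from
% matlab-deep-learning/transformer-models.
mdl = bert();                 % load pretrained BERT-Base and its tokenizer
tokenizer = mdl.Tokenizer;

% Replace with your own string array of ~1000 documents.
documents = ["first long text ..."; "second long text ..."];

sequences = cell(numel(documents), 1);

% parfor spreads encoding across CPU workers (Parallel Computing
% Toolbox); fall back to a plain for loop if it is unavailable.
parfor i = 1:numel(documents)
    str = documents(i);
    % Crude character cap: BERT-Base truncates at 512 tokens anyway,
    % so there is no point tokenizing far beyond that.
    n = min(strlength(str), 2000);
    str = extractBetween(str, 1, n);
    sequences{i} = encode(tokenizer, str);
end
```

The character cap is deliberately generous (a token is typically several characters), so it should not discard text that would have fit within the 512-token window.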
Accepted Answer
Additional Answers (1)
Ralf Elsas
26 February 2023