New version, extension: https://chromewebstore.google.com/detail/chatwork-gpt4-chatgpt-sid/mlinajkbaceacmgohnehoopkmajnleaf

The nlpcloud-ai-integrator™ add-on is a writing helper powered by the following AI models: Google flan-t5-large, fast-gpt-j (a GPT-3-like model), OpenAI Whisper large v2, finetuned-gpt-neox-20b (a GPT-3-like model), and parrot_paraphraser_on_T5. It provides summarization, keyword extraction, rewriting, spelling correction, headline generation, and speech-to-text functions. Select a piece of text, and it will output valuable results. Try it now!

Google flan-t5-large model info
If you already know T5, FLAN-T5 is simply better at everything. For the same number of parameters, these models have been fine-tuned on more than 1,000 additional tasks covering more languages.

GPT-NeoX-20B model info
GPT-NeoX-20B is a 20-billion-parameter autoregressive language model trained on the Pile using the GPT-NeoX library. Its architecture intentionally resembles that of GPT-3 and is almost identical to that of GPT-J-6B. Its training dataset contains a multitude of English-language texts, reflecting the general-purpose nature of this model. See the accompanying paper for details about the model architecture (including how it differs from GPT-3), the training procedure, and additional evaluations.

Model details
- Developed by: EleutherAI
- Model type: Transformer-based language model
- Language: English
- Learn more: GPT-NeoX-20B: An Open-Source Autoregressive Language Model. For details about the training dataset, see the Pile paper and its data sheet.
- License: Apache 2.0
- Contact: to ask questions about this model, join the EleutherAI Discord and post them in #release-discussion. Please read the existing GPT-NeoX-20B documentation before asking about the model on Discord. For general correspondence: email@example.com.

parrot_paraphraser_on_T5 model info
Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate the training of NLU models. A paraphrase framework is more than just a paraphrasing model.

OpenAI Whisper model info
https://openai.com/blog/whisper/
Whisper is an automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. We show that the use of such a large and diverse dataset leads to improved robustness to accents, background noise, and technical language. Moreover, it enables transcription in multiple languages, as well as translation from those languages into English. We are open-sourcing models and inference code to serve as a foundation for building useful applications and for further research on robust speech processing.
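The add-on's basic workflow (take the selected text, route it to the model behind the chosen function, and return the result) can be sketched in Python. This is a minimal illustration only: the endpoint URL, the task-to-model routing, and the `build_request` helper are all hypothetical assumptions, not the extension's actual implementation or any real provider's API.

```python
import json

# Hypothetical routing from each add-on function to the model the listing
# says powers it. The mapping and the endpoint layout are assumptions made
# for illustration only.
TASK_MODEL = {
    "summarization": "finetuned-gpt-neox-20b",
    "keywords": "fast-gpt-j",
    "paraphrasing": "parrot_paraphraser_on_T5",
    "spelling-correction": "google/flan-t5-large",
    "headline-generation": "google/flan-t5-large",
    "speech-to-text": "whisper-large-v2",
}

def build_request(task: str, selected_text: str, api_token: str) -> dict:
    """Describe the HTTP request a browser add-on might send for a task.

    Returns a dict with the (hypothetical) URL, headers, and JSON body,
    rather than performing network I/O, so the routing logic is easy to test.
    """
    if task not in TASK_MODEL:
        raise ValueError(f"unsupported task: {task}")
    return {
        "url": f"https://api.example-nlp-host.com/v1/{TASK_MODEL[task]}/{task}",
        "headers": {
            "Authorization": f"Token {api_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"text": selected_text}),
    }

# Example: what a "summarize selection" action might send.
req = build_request("summarization", "Some selected text...", "MY_TOKEN")
print(req["url"])
```

Keeping the request construction separate from the network call is the design point here: the same selected text flows through one helper regardless of which of the listed models ultimately serves the task.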