Papers
One of the best topics in text classification is multi-domain sentiment analysis. For this purpose we designed a model, "TRCAPS: The Transformer-based Capsule Approach for Persian Multi-Domain Sentiment Analysis", which achieved much better results than IndCaps…
Only the second author slot on this paper is still open...!
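For context, here is a minimal PyTorch sketch of a capsule classification head of the kind that can sit on top of a transformer encoder's sentence embedding. It only illustrates the general transformer-plus-capsules pattern; it is not the TRCAPS code, and the layer sizes, class count, and routing iteration count are assumptions made for the example.
# Illustrative capsule classification head (not the TRCAPS implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    # Capsule non-linearity: short vectors shrink toward 0, long ones toward length 1.
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

class CapsuleHead(nn.Module):
    def __init__(self, d_model=768, n_primary=32, d_primary=8, n_classes=2, d_class=16, routing_iters=3):
        super().__init__()
        self.n_primary, self.d_primary, self.routing_iters = n_primary, d_primary, routing_iters
        self.primary = nn.Linear(d_model, n_primary * d_primary)
        # One transformation matrix per (primary capsule, class capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(n_primary, n_classes, d_primary, d_class))

    def forward(self, h):  # h: (batch, d_model), e.g. the encoder's [CLS] embedding
        u = squash(self.primary(h).view(-1, self.n_primary, self.d_primary))
        u_hat = torch.einsum('bpd,pcde->bpce', u, self.W)   # per-pair predictions
        b = torch.zeros(u_hat.shape[:3], device=h.device)   # routing logits
        for _ in range(self.routing_iters):                 # dynamic routing by agreement
            c = F.softmax(b, dim=2)                           # coupling coefficients over classes
            v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))  # class capsules
            b = b + torch.einsum('bpce,bce->bpc', u_hat, v)   # agreement update
        return v.norm(dim=-1)  # capsule length serves as the class score

scores = CapsuleHead()(torch.randn(4, 768))  # -> (4, 2) class scores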
LLM Engineer's Handbook: Master the art of engineering Large Language Models from concept to production.
🖥 Github
@Machine_learn
Forwarded from Github LLMs
There are two models in the family; the example below uses Zamba2-2.7B-instruct:
# Clone repo
git clone https://github.com/Zyphra/transformers_zamba2.git
cd transformers_zamba2
# Install the repository & accelerate:
pip install -e .
pip install accelerate
# Inference:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B-instruct")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-2.7B-instruct", device_map="cuda", torch_dtype=torch.bfloat16)
# Placeholder conversation turns (substitute your own prompts).
user_turn_1 = "user_prompt1."
assistant_turn_1 = "assistant_prompt."
user_turn_2 = "user_prompt2."
sample = [{'role': 'user', 'content': user_turn_1}, {'role': 'assistant', 'content': assistant_turn_1}, {'role': 'user', 'content': user_turn_2}]
# Render the conversation with the model's chat template, then tokenize.
chat_sample = tokenizer.apply_chat_template(sample, tokenize=False)
inputs = tokenizer(chat_sample, return_tensors='pt', add_special_tokens=False).to("cuda")
# Greedy decoding (num_beams=1, do_sample=False) gives deterministic output.
outputs = model.generate(**inputs, max_new_tokens=150, return_dict_in_generate=False, output_scores=False, use_cache=True, num_beams=1, do_sample=False)
print(tokenizer.decode(outputs[0]))
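Note that tokenizer.decode(outputs[0]) returns the prompt together with the completion; to print only the new tokens, decode outputs[0][inputs['input_ids'].shape[1]:] instead.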
https://www.tg-me.com/deep_learning_proj
Hello everyone, today is the last day to take part in this paper...!
# install Diffusers
pip install -U diffusers
# Inference
import torch
from diffusers import StableDiffusion3Pipeline
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")
# Generate one image; guidance_scale trades prompt adherence against diversity.
image = pipe(
    "A happy woman lying on the grass",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("woman.png")
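If the full bfloat16 pipeline does not fit in GPU memory, replacing pipe.to("cuda") with pipe.enable_model_cpu_offload() (which requires accelerate) loads each submodule onto the GPU only while it runs.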
@Machine_learn
Running Aya Expanse 8B with Transformers:
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-expanse-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format the message with the chat template
messages = [{"role": "user", "content": " %prompt% "}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>%prompt%<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
# Sample with a low temperature for mostly deterministic output.
gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
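The decoded text keeps the special chat tokens shown in the template comment above; pass skip_special_tokens=True to tokenizer.decode to strip them.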
@Machine_learn