🌟 Zamba2-Instruct

The family includes two models:

🟢Zamba2-1.2B-instruct;
🟠Zamba2-2.7B-instruct.


# Clone repo
git clone https://github.com/Zyphra/transformers_zamba2.git
cd transformers_zamba2

# Install the repository & accelerate:
pip install -e .
pip install accelerate

# Inference:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba2-2.7B-instruct", device_map="cuda", torch_dtype=torch.bfloat16
)

# Build a multi-turn conversation and render it with the model's chat template
user_turn_1 = "user_prompt1."
assistant_turn_1 = "assistant_prompt."
user_turn_2 = "user_prompt2."
sample = [
    {'role': 'user', 'content': user_turn_1},
    {'role': 'assistant', 'content': assistant_turn_1},
    {'role': 'user', 'content': user_turn_2},
]
chat_sample = tokenizer.apply_chat_template(sample, tokenize=False)

# Tokenize and generate greedily (num_beams=1, do_sample=False)
input_ids = tokenizer(chat_sample, return_tensors='pt', add_special_tokens=False).to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=150, return_dict_in_generate=False, output_scores=False, use_cache=True, num_beams=1, do_sample=False)
print(tokenizer.decode(outputs[0]))
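As an aside on the snippet above: apply_chat_template simply flattens the list of role/content turns into one prompt string using the model's own template. A toy stand-in sketch of that step (the <|role|> tag format here is invented for illustration and is not Zamba2's actual template):

```python
# Toy stand-in for tokenizer.apply_chat_template: render chat turns
# into a single prompt string. Tag format is illustrative only.
def render_chat(turns, add_generation_prompt=True):
    parts = [f"<|{turn['role']}|> {turn['content']}" for turn in turns]
    if add_generation_prompt:
        # Leave an open assistant header so the model continues from it
        parts.append("<|assistant|>")
    return "\n".join(parts)

sample = [
    {'role': 'user', 'content': 'Hello'},
    {'role': 'assistant', 'content': 'Hi!'},
    {'role': 'user', 'content': 'What is Zamba2?'},
]
prompt = render_chat(sample)
```

When the last turn is from the user, real templates are usually applied with add_generation_prompt=True so the rendered prompt ends with an open assistant header.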


🖥 GitHub


@Machine_learn
📄 Advances of Artificial Intelligence in Anti-Cancer Drug Design: A Review of the Past Decade



📎 Study the paper

@Machine_learn
Forwarded from Papers
One of the best topics in text classification is multi-domain sentiment analysis. For this purpose we designed a model,
Title: TRCAPS: The Transformer-based Capsule Approach for Persian Multi-Domain Sentiment Analysis
which achieved much better results than IndCaps.
Friends who need a paper in the NLP field can join this paper until the end of this week.

The target journal is Array (Elsevier).

Participants in this paper will also need to complete some tasks.

@Raminmousa
@Machine_learn
@Paper4money
Linear Algebra Done Right

📓 Book

@Machine_learn
💡 Ultimate Guide to Fine-Tuning LLMs

📚 link

@Machine_learn
LLM Engineer's Handbook: Master the art of engineering Large Language Models from concept to production.

🖥 Github

@Machine_learn
Only positions 2 and 4 on this one are still available ...!
📑 A guide to RNA sequencing and functional analysis


📎 Study the paper

@Machine_learn
The State of AI Report

📚 Report

@Machine_learn
NotebookLlama: An Open Source version of NotebookLM

📚 Book

@Machine_learn
Tutorial on Diffusion Models for Imaging and Vision

📚 Book

@Machine_learn
An Infinite Descent into Pure Mathematics

📚 Book

@Machine_learn
2024/11/16 13:49:51