Linear Algebra Done Right

📓 Book

@Machine_learn
💡 Ultimate Guide to Fine-Tuning LLMs

📚 link

@Machine_learn
LLM Engineer's Handbook: Master the art of engineering Large Language Models from concept to production.

🖥 GitHub

@Machine_learn
Only author slots 2 and 4 are left on this one...!
📑 A guide to RNA sequencing and functional analysis

📎 Study the paper

@Machine_learn
The State of AI Report

📚 Report

@Machine_learn
NotebookLlama: An Open Source version of NotebookLM

🖥 GitHub

@Machine_learn
Tutorial on Diffusion Models for Imaging and Vision

📚 Book

@Machine_learn
An Infinite Descent into Pure Mathematics

📚 Book

@Machine_learn
Forwarded from Github LLMs
🌟 Zamba2-Instruct

The family includes two models:

🟢 Zamba2-1.2B-instruct
🟠 Zamba2-2.7B-instruct

# Clone repo
git clone https://github.com/Zyphra/transformers_zamba2.git
cd transformers_zamba2

# Install the repository & accelerate:
pip install -e .
pip install accelerate

# Inference:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the tokenizer and model onto the GPU in bfloat16
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B-instruct")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-2.7B-instruct", device_map="cuda", torch_dtype=torch.bfloat16)

# Build an example multi-turn chat (placeholder prompts)
user_turn_1 = "user_prompt1."
assistant_turn_1 = "assistant_prompt."
user_turn_2 = "user_prompt2."
sample = [
    {'role': 'user', 'content': user_turn_1},
    {'role': 'assistant', 'content': assistant_turn_1},
    {'role': 'user', 'content': user_turn_2},
]
chat_sample = tokenizer.apply_chat_template(sample, tokenize=False)

# Tokenize the formatted chat and generate a greedy response
input_ids = tokenizer(chat_sample, return_tensors='pt', add_special_tokens=False).to("cuda")
outputs = model.generate(
    **input_ids,
    max_new_tokens=150,
    return_dict_in_generate=False,
    output_scores=False,
    use_cache=True,
    num_beams=1,
    do_sample=False,
)
print(tokenizer.decode(outputs[0]))
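
The smaller checkpoint should load the same way; a minimal sketch, only swapping the model id (untested):

# Same API as above, just the 1.2B sibling
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-1.2B-instruct")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba2-1.2B-instruct", device_map="cuda", torch_dtype=torch.bfloat16)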

🖥GitHub

https://www.tg-me.com/deep_learning_proj
📕 Applied Causal #Inference Powered by #MachineLearning

📌Book

@Machine_learn
Thinking LLMs: General Instruction Following with Thought Generation

📚 Read

@Machine_learn
Hello everyone, today is the last chance to participate in this paper...!
⚡️ Stable Diffusion 3.5 Large.

# Install diffusers
pip install -U diffusers


# Inference
import torch
from diffusers import StableDiffusion3Pipeline

# Load the pipeline in bfloat16 and move it to the GPU
pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")

# Generate an image from a text prompt
image = pipe(
    "A happy woman lying on the grass",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("woman.png")
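
In bfloat16 the full pipeline needs a large GPU. A minimal sketch of 4-bit NF4 loading to cut VRAM (assumes a recent diffusers with bitsandbytes support installed; not verified here):

# Load only the transformer in 4-bit NF4, then hand it to the pipeline
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel, StableDiffusion3Pipeline

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
# Keep idle submodules on the CPU instead of holding everything on the GPU
pipe.enable_model_cpu_offload()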

🟡Arxiv

@Machine_learn
🌟 Aya Expanse

🟢 Aya Expanse 32B
🟢 Aya Expanse 8B

🟠 Aya Expanse 32B-GGUF
🟠 Aya Expanse 8B-GGUF

Running Expanse 8B with Transformers:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/aya-expanse-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format the message with the chat template
messages = [{"role": "user", "content": " %prompt% "}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>%prompt%<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

# Sample a response (low temperature keeps output focused)
gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
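
For the GGUF builds, a one-line llama.cpp sketch (assumes a local llama.cpp build; the quant file name below is a hypothetical download):

# Run the 8B GGUF; the file name is a placeholder example
./llama-cli -m aya-expanse-8b-Q4_K_M.gguf -p "Your prompt here" -n 100 --temp 0.3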

🟡GGUF 32B
🟡GGUF 8B
🟡Demo

@Machine_learn