Tutorial on Diffusion Models for Imaging and Vision

📚 Book

@Machine_learn
An Infinite Descent into Pure Mathematics

📚 Book

@Machine_learn
Forwarded from Github LLMs
🌟 Zamba2-Instruct

The family includes two models:

🟢Zamba2-1.2B-instruct;
🟠Zamba2-2.7B-instruct.



# Clone the repo
git clone https://github.com/Zyphra/transformers_zamba2.git
cd transformers_zamba2

# Install the repository & accelerate:
pip install -e .
pip install accelerate

# Inference:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba2-2.7B-instruct", device_map="cuda", torch_dtype=torch.bfloat16
)

# Build a multi-turn chat and render it with the model's chat template
user_turn_1 = "user_prompt1."
assistant_turn_1 = "assistant_prompt."
user_turn_2 = "user_prompt2."
sample = [
    {'role': 'user', 'content': user_turn_1},
    {'role': 'assistant', 'content': assistant_turn_1},
    {'role': 'user', 'content': user_turn_2},
]
chat_sample = tokenizer.apply_chat_template(sample, tokenize=False)

# Tokenize (the template already inserts special tokens) and generate greedily
inputs = tokenizer(chat_sample, return_tensors='pt', add_special_tokens=False).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=150, use_cache=True, num_beams=1, do_sample=False)
print(tokenizer.decode(outputs[0]))





🖥 GitHub

https://www.tg-me.com/deep_learning_proj
Thinking LLMs: General Instruction Following with Thought Generation

📚 Read

@Machine_learn
Greetings. Today is the last day to join this paper as a co-author...!
⚡️ Stable Diffusion 3.5 Large.

# Install Diffusers
pip install -U diffusers


# Inference
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
)
pipe = pipe.to("cuda")

image = pipe(
    "A happy woman lying on the grass",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("woman.png")





🟡Arxiv



@Machine_learn
🌟 Aya Expanse


🟢Aya Expanse 32B
🟢Aya Expanse 8B


🟠Aya Expanse 32B-GGUF
🟠Aya Expanse 8B-GGUF

Inference with Aya Expanse 8B via Transformers:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereForAI/aya-expanse-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format the message with the chat template
messages = [{"role": "user", "content": " %prompt% "}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>%prompt%<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)





🟡GGUF 32B
🟡GGUF 8B
🟡Demo
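
The GGUF builds can also be run locally. A minimal llama-cpp-python sketch (the repo id and quant filename below are assumptions; point them at the GGUF repository and file you actually download):

# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="CohereForAI/aya-expanse-8b-GGUF",  # assumption: substitute the real GGUF repo
    filename="*Q4_K_M.gguf",                    # assumption: a 4-bit quant file
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=100,
    temperature=0.3,
)
print(out["choices"][0]["message"]["content"])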


@Machine_learn
SAM2Long: Enhancing SAM 2 for Long Video Segmentation with a Training-Free Memory Tree

🖥 Github: https://github.com/mark12ding/sam2long

📕 Paper: https://arxiv.org/abs/2410.16268v1

🤗 HF: https://huggingface.co/papers/2410.16268
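
SAM2Long is training-free and is presented as a drop-in enhancement of SAM 2's video predictor, so a session should look roughly like SAM 2's. A rough sketch following SAM 2's published API (the config/checkpoint names, frame directory, and click coordinates are assumptions; check the repo for exact entry points):

import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

# Assumed config/checkpoint pair from the SAM 2 release
predictor = build_sam2_video_predictor("sam2_hiera_l.yaml", "sam2_hiera_large.pt")

with torch.inference_mode():
    state = predictor.init_state(video_path="./video_frames")  # directory of JPEG frames

    # Prompt one object with a single positive click on frame 0
    predictor.add_new_points_or_box(
        state, frame_idx=0, obj_id=1,
        points=np.array([[210, 350]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32),
    )

    # Propagate masks through the rest of the video
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        masks = (mask_logits > 0.0).cpu().numpy()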

@Machine_learn
Forwarded from Papers
💠Title: BERTCaps: BERT Capsule for Persian Multi-domain Sentiment Analysis.

🔺Abstract:
Sentiment classification is widely known to be a domain-dependent problem. Learning an accurate domain-specific sentiment classifier requires a large number of labeled samples, which are expensive and time-consuming to annotate. Multi-domain sentiment analysis based on multi-task learning can leverage the labeled samples available in each individual domain, alleviating the need for large amounts of labeled data across all domains. This article proposes BERTCaps, a multi-domain classifier in which BERT is used for instance representation and a capsule network for instance learning. On the evaluation dataset, the model achieved an accuracy of 0.9712 in polarity classification and 0.8509 in domain classification.

Journal: https://www.sciencedirect.com/journal/array
IF: 2.3
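
A rough sketch of the multi-task idea described in the abstract (the encoder name, capsule sizes, and the simplified single-projection "capsule" layer are assumptions; the paper's exact architecture may differ):

import torch
import torch.nn as nn
from transformers import AutoModel

def squash(x, dim=-1, eps=1e-8):
    # Capsule "squash" nonlinearity: shrinks short vectors toward 0, long ones toward 1
    n2 = (x ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * x / torch.sqrt(n2 + eps)

class BERTCapsSketch(nn.Module):
    # Hypothetical reconstruction: shared BERT encoder -> capsule-style layer ->
    # joint polarity + domain heads (multi-task)
    def __init__(self, num_domains, num_caps=16, cap_dim=32,
                 encoder="HooshvareLab/bert-base-parsbert-uncased"):  # assumption: a Persian BERT
        super().__init__()
        self.bert = AutoModel.from_pretrained(encoder)
        h = self.bert.config.hidden_size
        self.to_caps = nn.Linear(h, num_caps * cap_dim)
        self.num_caps, self.cap_dim = num_caps, cap_dim
        self.polarity_head = nn.Linear(num_caps * cap_dim, 2)         # positive / negative
        self.domain_head = nn.Linear(num_caps * cap_dim, num_domains)

    def forward(self, input_ids, attention_mask):
        cls = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]
        caps = squash(self.to_caps(cls).view(-1, self.num_caps, self.cap_dim))
        flat = caps.flatten(1)
        return self.polarity_head(flat), self.domain_head(flat)

# Training would sum the two cross-entropy losses: L = L_polarity + L_domain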

We still need co-authors for positions 2 and 4 of this paper.
Anyone interested in participating can message me directly.
@Raminmousa
@Paper4money
@Machine_learn
SmolLM2 1.7B - beats Qwen 2.5 1.5B & Llama 3.2 1B, Apache 2.0 licensed, trained on 11 trillion tokens 🔥

> 135M, 360M, and 1.7B parameter models
> Trained on FineWeb-Edu, DCLM, The Stack, along w/ new mathematics and coding datasets
> Specialises in text rewriting, summarization & function calling
> Integrated with transformers & models on the hub!

You can run the 1.7B in less than 2GB VRAM with a Q4 quant 👑

Fine-tune, run inference, test, train, repeat - intelligence is just 5 lines of code away!

https://huggingface.co/collections/HuggingFaceTB/smollm2-6723884218bcda64b34d7db9
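
For instance, a minimal inference sketch with transformers (the instruct checkpoint name below follows the collection's naming scheme and should be verified there):

# pip install transformers torch
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

ckpt = "HuggingFaceTB/SmolLM2-1.7B-Instruct"  # assumption: instruct variant from the collection
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize: the quick brown fox jumps over the lazy dog."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=100, do_sample=False)[0]))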

@Machine_learn
📑A Survey of Deep Learning Methods for Estimating the Accuracy of Protein Quaternary Structure Models



📎 Study the paper

@Machine_learn
Data Pipelines with Apache Airflow

📘 book

@Machine_learn
Forwarded from Github LLMs
📖 LLM-Agent-Paper-List is a repository of papers on agents based on large language models (LLMs). The papers are grouped into categories such as LLM agent architectures, autonomous LLM agents, reinforcement learning (RL), natural language processing methods, multimodal approaches, tools for developing LLM agents, and more.

🖥 Github

https://www.tg-me.com/deep_learning_proj
👩‍💻 Python Notes for Professionals book

🔗 Book

@Machine_learn
Forwarded from Papers
Thank God, over the past three months we managed to carry out collaborative papers on the following tasks:
🔹Submitted four papers in the area of multi-modal wound classification

🔹Presented two papers in the area of breast cancer segmentation

🔹Presented three papers in the area of cancer detection,
with about 80% of the work on these papers already complete.

Soon, once these papers are finished, we will publish a list of the collaborative papers.

https://www.tg-me.com/+SP9l58Ta_zZmYmY0
Only the second author position of this paper remains.