📑 Nine quick tips for open meta-analyses


📎 Study the paper

@Machine_learn
Algebraic topology for physicists

📓 Book

@Machine_learn
✔️ LVD-2M: A Long-take Video Dataset with Temporally Dense Captions

New pipeline for selecting high-quality long-take videos and generating temporally dense captions.

The dataset has four key features essential for training long-video generation models: (1) long videos covering at least 10 seconds, (2) long takes without cuts, (3) large motion and diverse content, and (4) temporally dense captions.
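A rough illustration (our sketch, not the paper's actual pipeline) of filtering clips by these four criteria; the metadata field names and thresholds below are assumptions:

def keep_clip(clip, min_duration=10.0, min_motion=1.0):
    # (1) long videos: at least 10 seconds
    # (2) long take: no detected scene cuts
    # (3) large motion: mean optical-flow magnitude above a threshold
    return (clip["duration_s"] >= min_duration
            and clip["num_cuts"] == 0
            and clip["mean_flow"] >= min_motion)

clips = [
    {"id": "a", "duration_s": 12.4, "num_cuts": 0, "mean_flow": 3.1},
    {"id": "b", "duration_s": 8.0, "num_cuts": 2, "mean_flow": 0.4},
]
long_takes = [c for c in clips if keep_clip(c)]
# (4) temporally dense captions are then generated for the surviving clips.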

🖥 Github: https://github.com/silentview/lvd-2m

📕 Paper: https://arxiv.org/abs/2410.10816v1

🖥 Dataset: https://paperswithcode.com/dataset/howto100m

🔸@Machine_learn
Forwarded from Github LLMs
🔥 NVIDIA silently released a Llama 3.1 70B fine-tune that outperforms GPT-4o and Claude 3.5 Sonnet

Llama 3.1 Nemotron 70B Instruct, a further RLHF-tuned model, is now on Hugging Face:


https://huggingface.co/collections/nvidia/llama-31-nemotron-70b-670e93cd366feea16abc13d8
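A minimal loading sketch (assuming the HF-format checkpoint id from the collection above and the standard transformers chat workflow):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# A 70B model does not fit on one GPU; device_map="auto" shards it across devices
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "How many r's are in strawberry?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))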
https://www.tg-me.com/deep_learning_proj
Prompt Engineering Techniques: Comprehensive Repository for Development and Implementation 🖋️

📓 Github
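One classic technique covered in collections like this is few-shot prompting; a minimal template sketch (our own illustration, not code from the repo):

FEW_SHOT = """Classify the sentiment of each review as positive or negative.

Review: The battery dies within an hour.
Sentiment: negative

Review: Crisp screen and great speakers.
Sentiment: positive

Review: {review}
Sentiment:"""

prompt = FEW_SHOT.format(review="Setup took five minutes and it just works.")
# send `prompt` to any chat/completions endpoint of your choice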

@Machine_learn
🌟 Zamba2-Instruct

Instruct-tuned checkpoints of Zyphra's Zamba2 models, in two sizes:

🟢Zamba2-1.2B-instruct;
🟠Zamba2-2.7B-instruct.


# Clone the standalone Zamba2 fork of transformers
git clone https://github.com/Zyphra/transformers_zamba2.git
cd transformers_zamba2

# Install the repository & accelerate:
pip install -e .
pip install accelerate

# Inference:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba2-2.7B-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "Zyphra/Zamba2-2.7B-instruct", device_map="cuda", torch_dtype=torch.bfloat16
)

# Build a multi-turn conversation and render it with the model's chat template
user_turn_1 = "user_prompt1."
assistant_turn_1 = "assistant_prompt."
user_turn_2 = "user_prompt2."
sample = [
    {"role": "user", "content": user_turn_1},
    {"role": "assistant", "content": assistant_turn_1},
    {"role": "user", "content": user_turn_2},
]
chat_sample = tokenizer.apply_chat_template(sample, tokenize=False)

# Tokenize (the template already added special tokens) and generate greedily
inputs = tokenizer(chat_sample, return_tensors="pt", add_special_tokens=False).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=150, do_sample=False, use_cache=True)
print(tokenizer.decode(outputs[0]))


🖥GitHub


@Machine_learn
📄 Advances of Artificial Intelligence in Anti-Cancer Drug Design: A Review of the Past Decade



📎 Study the paper

@Machine_learn
Forwarded from Papers
One of the best topics in text classification is multi-domain sentiment analysis. For this purpose we designed a model,
Title: TRCAPS: The Transformer-based Capsule Approach for Persian Multi-Domain Sentiment Analysis,
which achieved much better results than IndCaps.
Friends who need a paper in the NLP field can join this one until the end of this week.

The target journal is Elsevier's Array.

Participants in this paper will also be expected to complete some tasks.

@Raminmousa
@Machine_learn
@Paper4money
Linear Algebra Done Right

📓 Book

@Machine_learn
💡 Ultimate Guide to Fine-Tuning LLMs

📚 link
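To make the topic concrete, here is a minimal LoRA setup in the Hugging Face peft library (our illustration, not code from the guide; the base model and hyperparameters are placeholders):

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in base model
config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable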

@Machine_learn
LLM Engineer's Handbook: Master the art of engineering Large Language Models from concept to production.

🖥 Github

@Machine_learn
Only author positions 2 and 4 are still open on this one....!
📑 A guide to RNA sequencing and functional analysis


📎 Study the paper

@Machine_learn
The State of AI Report

📚 Report

@Machine_learn
NotebookLlama: An Open Source version of NotebookLM

🖥 Github

@Machine_learn