KAG: Boosting LLMs in Professional Domains via Knowledge Augmented Generation
Paper: https://arxiv.org/pdf/2409.13731v3.pdf
Code: https://github.com/openspg/kag
Dataset: 2WikiMultiHopQA
🔸 @Machine_learn
CogAgent: A Visual Language Model for GUI Agents
Paper: https://arxiv.org/pdf/2312.08914v3.pdf
CVPR 2024: http://openaccess.thecvf.com//content/CVPR2024/papers/Hong_CogAgent_A_Visual_Language_Model_for_GUI_Agents_CVPR_2024_paper.pdf
Code1: https://github.com/thudm/cogvlm
Code2: https://github.com/digirl-agent/digirl
Code3: https://github.com/THUDM/CogAgent
Dataset: TextVQA
💠 @Machine_learn
Tonight is the last chance to take part in this paper...! 🔸 🔸
# Install from PyPI
pip install outetts

# Interface usage
import outetts

# Configure the model
model_config = outetts.HFModelConfig_v1(
    model_path="OuteAI/OuteTTS-0.2-500M",
    language="en",  # Supported languages in v0.2: en, zh, ja, ko
)

# Initialize the interface
interface = outetts.InterfaceHF(model_version="0.2", cfg=model_config)

# Optional: create a speaker profile (use a 10-15 second audio clip)
speaker = interface.create_speaker(
    audio_path="path/to/audio/file",
    transcript="Transcription of the audio file."
)

# Optional: load a speaker from the default presets
interface.print_default_speakers()
speaker = interface.load_default_speaker(name="male_1")

# Generate speech
output = interface.generate(
    text="Prompt text to synthesize.",
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4096,
    # Optional: use a speaker profile
    speaker=speaker,
)

# Save the synthesized speech to a file
output.save("output.wav")
@Machine_learn
# Install from PyPI
pip install neuzip

# Use NeuZip to compress a PyTorch model
import neuzip
import torch

model: torch.nn.Module = ...  # your model
manager = neuzip.Manager()
model = manager.convert(model)
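The idea behind NeuZip is that the exponent bits of trained network weights have low entropy and compress well losslessly, while the mantissa bits are near-random. A stdlib-only toy illustration of that observation (not the library's actual entropy coder, which is far more efficient):

```python
import struct
import zlib
import random

# Toy "weights": small values near zero, as in a trained network
random.seed(0)
weights = [random.gauss(0.0, 0.02) for _ in range(4096)]

# Pack as big-endian float32; byte 0 of each value holds the sign bit
# and the top bits of the exponent
packed = b"".join(struct.pack(">f", w) for w in weights)
exponents = bytes(packed[i] for i in range(0, len(packed), 4))

# Exponent bytes cluster tightly, so they compress far better
# than the raw float bytes, whose mantissas are near-random
ratio_exp = len(zlib.compress(exponents)) / len(exponents)
ratio_all = len(zlib.compress(packed)) / len(packed)
print(ratio_exp, ratio_all)
```

On this toy data the exponent stream compresses to a small fraction of its size while the full float stream barely shrinks, which is the asymmetry NeuZip exploits.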
@Machine_learn
Forwarded from Papers
Hello, we have started the BioPars project and need a 5th author for this paper.
This work is being carried out under the supervision of Prof. Rex (Zhitao) Ying.
link: https://scholar.google.com.au/citations?user=6fqNXooAAAAJ&hl=en
BioPars: a pre-trained biomedical large language model for persian biomedical text mining.
1. Initial stage: collecting Persian biological texts from sources (...)
2. Preprocessing and cleaning the texts
3. Training the selected transformer models
4. Using the trained embeddings in three tasks (...)
The server costs $1.2 per hour, and about 2,000 hours are needed to train the language model (roughly $2,400 in total).
Anyone interested can join our team.
@Raminmousa
We have reduced the final fee for the 5th author position to 25 million...! 🔥
Automating the Search for Artificial Life with Foundation Models
Paper: https://arxiv.org/pdf/2412.17799v1.pdf
Code: https://github.com/sakanaai/asal
@Machine_learn
Forwarded from Papers
Hello, the paper below is at the major-revision stage. A 4th author can still be added to this paper.
Abstract
Breast cancer stands as a prevalent cause of fatality among females on a global scale, with
prompt detection playing a pivotal role in diminishing mortality rates. The utilization of
ultrasound scans in the BUSI dataset for medical imagery pertaining to breast cancer has
exhibited commendable segmentation outcomes through the application of UNet and UNet++
networks. Nevertheless, a notable drawback of these models resides in their inattention towards
the temporal aspects embedded within the images. This research endeavors to enrich the
UNet++ architecture by integrating LSTM layers and self-attention mechanisms to exploit
temporal characteristics for segmentation purposes. Furthermore, the incorporation of a
Multiscale Feature Extraction Module aims to grasp varied scale features within the UNet++.
Through the amalgamation of our proposed methodology with data augmentation on the BUSI
with GT dataset, an accuracy rate of 98.88%, specificity of 99.53%, precision of 95.34%,
sensitivity of 91.20%, F1-score of 93.74%, and Dice coefficient of 92.74% are achieved. These
findings demonstrate competitiveness with cutting-edge techniques outlined in existing
literature.
Keywords: Attention mechanisms, BUSI dataset, Deep Learning, Feature Extraction,
Multi-Scale features
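For context on the segmentation metrics quoted above: for binary masks the Dice coefficient is 2|P∩G| / (|P| + |G|), which coincides with the F1-score of the foreground class. A minimal stdlib-only illustration on toy flattened masks (not the paper's code):

```python
def dice(pred, truth):
    """Dice coefficient 2|P∩G| / (|P| + |G|) for binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

# Toy flattened binary masks: 2 overlapping foreground pixels,
# 3 foreground pixels in each mask
pred = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```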
Anyone interested can message me directly.
@Raminmousa
@Machine_learn
https://www.tg-me.com/+SP9l58Ta_zZmYmY0
Telegram
Papers
In this channel we will share the papers we are working on. We will support each other and present new work.
@Raminmousa