Understanding the Forecast Statistics and Four Moments (4P).pdf (181.8 KB)
Statistical Moments (M1, M2) for Data Analysis
Here are 5 curated PDFs diving into the mean (M1), variance (M2), and their applications in crafting research questions and sourcing data.
A channel member requested resources on this topic and we delivered.
If you have a topic you want resources on, let us know, and we’ll make it happen!
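For a quick hands-on companion to the PDFs, here is a minimal Python sketch that computes M1 and M2, plus the standardized higher moments for context (the sample values are made up for illustration):

# Minimal sketch: first two sample moments with NumPy, higher moments via SciPy
import numpy as np
from scipy import stats

x = np.array([12.1, 9.8, 14.3, 11.0, 10.5, 13.7, 9.2, 12.9])  # illustrative sample

m1 = x.mean()               # M1: mean (central tendency)
m2 = x.var(ddof=1)          # M2: sample variance (spread)
skew = stats.skew(x)        # M3 (standardized): asymmetry
kurt = stats.kurtosis(x)    # M4 (standardized, excess): tail weight

print(f"M1 (mean) = {m1:.3f}, M2 (variance) = {m2:.3f}")
print(f"skewness = {skew:.3f}, excess kurtosis = {kurt:.3f}")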
@datascience_bds
📚 Data Science Riddle
Why do we use Batch Normalization?
Anonymous Quiz
Speeds up training: 28%
Prevents overfitting: 42%
Adds non-linearity: 9%
Reduces dataset size: 20%
📚 Data Science Riddle
Your object detection model misses small objects. Easiest fix?
Anonymous Quiz
Use larger input images: 20%
Add more classes: 31%
Reduce learning rate: 35%
Train longer: 14%
🤖 AI that creates AI: ASI-ARCH finds 106 new SOTA architectures
ASI-ARCH — experimental ASI that autonomously researches and designs neural nets. It hypothesizes, codes, trains & tests models.
💡 Scale:
1,773 experiments → 20,000+ GPU-hours.
Stage 1 (20M params, 1B tokens): 1,350 candidates beat DeltaNet.
Stage 2 (340M params): 400 models → 106 SOTA winners.
Top 5 trained on 15B tokens vs Mamba2 & Gated DeltaNet.
📊 Results:
PathGateFusionNet: 48.51 avg (Mamba2: 47.84, Gated DeltaNet: 47.32).
BoolQ: 60.58 vs 60.12 (Gated DeltaNet).
Consistent gains across tasks.
🔍 Insights:
Prefers proven tools (gating, convs), refines them iteratively.
Ideas come from: 51.7% literature, 38.2% self-analysis, 10.1% originality.
Among the 106 SOTA winners: self-analysis ↑ to 44.8%, literature ↓ to 48.6%.
@datascience_bds
🚀 Databricks Tip: REPLACE vs MERGE
When updating Delta tables, you’ve got two powerful options:
🔹 CREATE OR REPLACE TABLE … AS SELECT
📚 Like throwing away the entire library and rebuilding it.
- Drops the old table & recreates it.
- Schema + data = fully replaced.
- ⚡ Super fast but destructive (old data gone).
- ✅ Best for full refreshes or schema changes.
🔹 MERGE
📖 Like updating only the books that changed.
- Works row by row.
- Updates, inserts, or deletes specific records.
- 🔍 Preserves unchanged data.
- ✅ Best for incremental updates or CDC (Change Data Capture).
⚖️ Key Difference
- REPLACE = Start fresh with a new table.
- MERGE = Surgically update rows without losing the rest.
👉 Rule of thumb:
Use REPLACE for full rebuilds,
Use MERGE for incremental upserts.
#Databricks #DeltaLake
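To make the rule of thumb concrete, here is a hedged sketch of both patterns as they might look in a Databricks notebook (PySpark + SQL); the table and column names sales, sales_updates, and id are placeholders, not taken from any real pipeline:

# Full refresh: atomically replace the table's schema and data
spark.sql("""
    CREATE OR REPLACE TABLE sales
    USING DELTA
    AS SELECT * FROM sales_updates
""")

# Incremental upsert: touch only the rows that changed
spark.sql("""
    MERGE INTO sales AS t
    USING sales_updates AS s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")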
📚 Data Science Riddle
You have messy CSVs arriving daily. What's your first production step?
Anonymous Quiz
Train model right away: 8%
Manually clean each file: 16%
Automate data validation pipeline: 58%
Combine all into one CSV: 19%
Feature Engineering: The Hidden Skill That Makes or Breaks ML Models
Most people chase better algorithms. Professionals chase better features.
Because no matter how fancy your model is, if the data doesn’t speak the right language, it won’t learn anything meaningful.
🔍 So What Exactly Is Feature Engineering?
It’s not just cleaning data. It’s translating raw, messy reality into something your model can understand.
You’re basically asking:
“How can I represent the real world in numbers, without losing its meaning?”
Example:
➖ “Date of birth” → Age (time-based insight)
➖ “Text review” → Sentiment score (emotional signal)
➖ “Price” → log(price) (stabilized distribution)
Every transformation teaches your model how to see the world more clearly.
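As a concrete illustration, here is a small pandas sketch of those three transformations; the column names and the toy word-list sentiment scorer are hypothetical, and a real pipeline would use a proper sentiment model:

# Toy feature engineering: dob -> age, review -> sentiment, price -> log(price)
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "dob": pd.to_datetime(["1990-05-01", "1985-11-23", "2000-02-14"]),
    "review": ["great product", "terrible support", "okay overall"],
    "price": [19.99, 450.00, 3200.00],
})

today = pd.Timestamp("2025-01-01")
df["age"] = (today - df["dob"]).dt.days // 365        # date of birth -> age in years

positive = {"great", "good", "excellent", "okay"}      # toy lexicon, illustration only
negative = {"terrible", "bad", "awful"}
def sentiment(text):
    words = set(text.lower().split())
    return len(words & positive) - len(words & negative)
df["sentiment"] = df["review"].map(sentiment)          # text review -> sentiment score

df["log_price"] = np.log1p(df["price"])                # price -> log(1 + price), tames skew

print(df[["age", "sentiment", "log_price"]])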
⚙️ Why It Matters More Than the Model
You can’t outsmart bad features.
A simple linear model trained on smartly engineered data will outperform a deep neural net trained on noise.
Kaggle winners know this. They spend 80% of their time creating and refining features, not tuning hyperparameters.
Why? Because models don’t create intelligence; they extract it from what you feed them.
🧩 The Core Idea: Add Signal, Remove Noise
Feature engineering is about sculpting your data so patterns stand out.
You do that by:
✔️ Transforming data (scale, encode, log).
✔️ Creating new signals (ratios, lags, interactions).
✔️ Reducing redundancy (drop correlated or useless columns).
Every step should make learning easier, not prettier.
⚠️ Beware of Data Leakage
Here’s the silent trap: using future information when building features.
For example, when predicting loan default, if you include “payment status after 90 days,” your model will look brilliant in training and fail in production.
Golden rule:
👉 A feature is valid only if it’s available at prediction time.
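A small sketch of how that rule plays out in code; the loan data below is made up and the column names (application_date, payment_status_90d, default) simply echo the example above:

# Guard against leakage: drop features observed after the prediction moment
import pandas as pd

loans = pd.DataFrame({
    "application_date": pd.to_datetime(
        ["2024-01-05", "2024-02-10", "2024-03-15", "2024-04-20", "2024-05-25"]),
    "income": [52000, 61000, 47500, 80000, 39000],
    "loan_amount": [10000, 15000, 8000, 20000, 5000],
    "payment_status_90d": ["late", "ok", "late", "ok", "ok"],  # known only 90 days later -> leaky
    "default": [1, 0, 1, 0, 0],
})

leaky_columns = ["payment_status_90d"]
target = "default"
features = loans.drop(columns=leaky_columns + [target])

# A time-based split mirrors production better than a random one:
# train on older applications, validate on the newest ones
cutoff = loans["application_date"].quantile(0.8)
train_mask = loans["application_date"] <= cutoff
X_train, y_train = features[train_mask], loans.loc[train_mask, target]
X_valid, y_valid = features[~train_mask], loans.loc[~train_mask, target]
print(X_train.columns.tolist())  # the leaky column is gone from the feature set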
🧠 Think Like a Domain Expert
Anyone can code transformations.
But great data scientists understand context.
They ask:
❔What actually influences this outcome in real life?
❔How can I capture that influence as a feature?
When you merge domain intuition with technical precision, feature engineering becomes your superpower.
⚡️ Final Takeaway
The model is the student.
The features are the teacher.
And no matter how capable the student is, if the teacher explains things poorly, learning fails.
Feature engineering isn’t preprocessing. It’s the art of teaching your model how to understand the world.
📚 Data Science Riddle
You train a CNN for image classification but loss stops decreasing early. What's your next step?
Anonymous Quiz
Reduce batch size: 20%
Increase learning rate a bit: 39%
Add Dropout: 23%
Reduce layers: 18%
⚡ Parallelism In Databricks ⚡
1️⃣ DEFINITION
Parallelism = running many tasks 🏃♂️🏃♀️ at the same time
(instead of one by one 🐢).
In Databricks (via Apache Spark), data is split into
📦 partitions, and each partition is processed
simultaneously across worker nodes 💻💻💻.
2️⃣ KEY CONCEPTS
🔹 Partition = one chunk of data 📦
🔹 Task = work done on a partition 🛠️
🔹 Stage = group of tasks that run in parallel ⚙️
🔹 Job = complete action (made of stages + tasks) 📊
3️⃣ HOW IT WORKS
✅ Step 1: Dataset ➡️ divided into partitions 📦📦📦
✅ Step 2: Each partition ➡️ assigned to a worker 💻
✅ Step 3: Workers run tasks in parallel ⏩
✅ Step 4: Results ➡️ combined into final output 🎯
4️⃣ EXAMPLES
# Increase parallelism by repartitioning
df = spark.read.csv("/data/huge_file.csv")
df = df.repartition(200) # ⚡ 200 parallel tasks
# Spark DataFrame ops run in parallel by default 🚀
result = df.groupBy("category").count()
# Parallelize small Python objects 📂
rdd = spark.sparkContext.parallelize(range(1000), numSlices=50)
rdd.map(lambda x: x * 2).collect()
# Parallel workflows in Jobs UI ⚡
# Independent tasks = run at the same time.
5️⃣ BEST PRACTICES
⚖️ Balance partitions → not too few, not too many
📉 Avoid data skew → partitions should be even
🗃️ Cache data if reused often
💪 Scale cluster → more workers = more parallelism
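To tie these best practices back to code, here is a hedged PySpark sketch; the path /data/events and the user_id column are placeholders, and the 8 × defaultParallelism target is a common heuristic, not a Databricks rule:

# Inspect and tune parallelism on a (placeholder) large DataFrame
df = spark.read.parquet("/data/events")

# Balance partitions: check the current count, then repartition up or coalesce down
print(df.rdd.getNumPartitions())
df = df.repartition(8 * spark.sparkContext.defaultParallelism)  # heuristic target

# Avoid skew: repartition by a well-distributed key before heavy shuffles
df = df.repartition("user_id")

# Cache data that is reused across several actions
df.cache()
df.count()  # materializes the cache

top = df.groupBy("user_id").count().orderBy("count", ascending=False).limit(10)
top.show()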
====================================================
📌 SUMMARY
Parallelism in Databricks = split data 📦 →
assign tasks 🛠️ → run them at the same time ⏩ →
faster results 🚀
📚 Data Science Riddle
In A/B testing, why is random assignment of users essential?
Anonymous Quiz
To reduce experiment time: 6%
To ensure groups are unbiased: 83%
To increase conversion rate: 8%
To simplify analysis: 4%
