Vandan V2
Vandan V2 is an automated trading strategy for NQ1! (E-mini Nasdaq-100) based on short-term mean reversion with dynamic risk control. It combines volatility filters and overbought/oversold signals to capture local market imbalances.
Backtested from 2015 to 2025, it achieved a +730% total return, Profit Factor of 1.40, max drawdown of only 1.61%, and over 106,000 trades. Designed for systematic scalping or intraday arbitrage with a limit of 3 simultaneous contracts.
Intraday Intensity Percent (IIP) by CoryP1990 – Quant Toolkit
The Intraday Intensity Percent (IIP) quantifies buying vs. selling pressure within each bar by combining price position inside the range and trading volume. It’s essentially a volume-weighted order-flow indicator, showing whether volume concentrates near highs (buying pressure) or lows (selling pressure).
 How it works 
Computes the Intraday Intensity (II) = ((Close − Low) − (High − Close)) / (High − Low) × Volume.
Then compares total “intensity” to total volume over a look-back window to produce a normalized percentage (see the sketch after this list).
Lime line: IIP rising → accumulation / increasing buy pressure.
Red line: IIP falling → distribution / increasing sell pressure.
Background: Green tint = heavy buying, Red tint = heavy selling.
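A minimal Pine Script sketch of the calculation above, assuming a simple sum-over-window normalization (the published script's exact normalization may differ):
```pine
//@version=5
indicator("IIP sketch")
length = input.int(14, "Length")
// Intraday Intensity: close position within the bar's range, weighted by volume
ii = high == low ? 0.0 : ((close - low) - (high - close)) / (high - low) * volume
// Normalize total intensity by total volume over the look-back, as a percentage
iip = 100 * math.sum(ii, length) / math.sum(volume, length)
plot(iip, "IIP", color = iip > iip[1] ? color.lime : color.red)
hline(5, "High-pressure threshold")
hline(-5, "Low-pressure threshold")
```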
 Use cases 
Identify accumulation or distribution phases early.
Confirm momentum with volume-backed pressure.
Detect divergences between price and volume flow.
 Defaults 
Length = 14
High-pressure threshold = +5 %
Low-pressure threshold = −5 %
 Example — AAPL (2H) 
Late July into early August shows sustained distribution as IIP sinks below −5% (deep red), marking heavy sell pressure during the drop. From early to mid-August, IIP flips positive and holds > +5% (green background), aligning with the rebound. After a brief mid-September shakeout, late Sep–mid Oct features renewed accumulation with repeated green surges. Most recently, IIP prints around −33%, indicating dominant selling pressure into the latest two-hour bars.
 Part of the Quant Toolkit — transparent, open-source indicators for modern quantitative analysis. Built by CoryP1990.
Intraday Perpetual Premium & Z-Score
This indicator measures the real-time premium of a perpetual futures contract relative to its spot market and interprets it through a statistical lens.
It helps traders detect when funding pressure is building, when leverage is being unwound, and when crowding in the futures market may precede volatility.
How it works
• Premium (%) = (Perp – Spot) ÷ Spot × 100
The script fetches both spot and perpetual prices and calculates their percentage difference each minute.
• Rolling Mean & Z-Score
Over a 4-hour look-back, it computes the average premium and standard deviation to derive a Z-Score, showing how stretched current sentiment is.
• Dynamic ±2σ Bands highlight statistically extreme premiums or discounts.
• Rate of Change (ROC) over one hour gauges the short-term directional acceleration of funding flows (see the sketch after this list).
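A minimal sketch of the premium and Z-Score math; the spot/perp symbol defaults and the 1-minute chart assumption are illustrative:
```pine
//@version=5
indicator("Perp premium sketch")
spotSym = input.symbol("BINANCE:BTCUSDT", "Spot")
perpSym = input.symbol("BINANCE:BTCUSDT.P", "Perpetual")
len = input.int(240, "Look-back bars (4h on a 1m chart)")
spot = request.security(spotSym, timeframe.period, close)
perp = request.security(perpSym, timeframe.period, close)
premium = (perp - spot) / spot * 100
// Z-Score: how stretched the premium is vs. its rolling mean
z = (premium - ta.sma(premium, len)) / ta.stdev(premium, len)
roc = premium - premium[60]  // 1-hour change on a 1m chart
plot(z, "Premium Z-Score", color = z > 0 ? color.green : color.red)
hline(2, "+2σ")
hline(-2, "-2σ")
```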
Colour & Label Interpretation
Visual cue | Meaning | Trading Implication
🟢 Green bars + “BULL Pressure” | Premium rising faster than mean | Leverage inflows → momentum strengthening
🔴 Red bars + “BEAR Pressure” | Premium shrinking | Leverage unwind → pull-back or consolidation
⚠️ Orange “EXTREME Premium/Discount” | Premium beyond the ±2σ bands | Crowded trade → heightened reversal risk
⚪ Grey bars | Neutral | Balanced conditions
Alerts
• Bull Pressure Alert → funding & premium rising (momentum building)
• Bear Pressure Alert → premium falling (deleveraging)
• Extreme Premium Alert → crowded longs; potential top
• Extreme Discount Alert → capitulation; possible bottom
Use case
Combine this indicator with your Heikin-Ashi, RSI, and MACD confluence rules:
• Enter only when your oscillators are low and curling up, and Bull Pressure triggers.
• Trim or exit when Bear Pressure or Extreme Premium appears.
• Watch for Extreme Discount during flushes as an early bottoming clue.
Custom Horizontal Lines | Trade Symmetry
📊 Custom Horizontal Lines
🔍 Overview
Custom Horizontal Lines is a precision utility designed for traders who perform manual higher-timeframe analysis and want to preserve their marked price levels directly on the chart.
It doesn’t calculate or detect anything automatically — instead, it acts as your personal level memory, preserving your analyzed zones and reference prices throughout the session.
Ideal for traders who manually mark the High, Low, Open, Close, Mean Thresholds, and Quarter Levels of Order Blocks, Fair Value Gaps, Inversion Fair Value Gaps and Wicks before the trading day begins.
⚙️ Key Features
✅ Manual Level Entry — Input your analyzed price levels (OB, FVG, Wick, etc.) directly into the indicator settings.
✅ Preserved Levels — Once entered, your lines stay visible and consistent — even after switching symbols, timeframes, or reloading the chart.
✅ Supports All Level Types — Store any kind of manually defined level: OB highs/lows, FVG boundaries, Wicks, Mean Thresholds, Quarter levels, or custom reference prices.
✅ Clean Visualization — Customize line color, style, and labels for easy visual organization.
✅ Session-Ready Workflow — Built for pre-market preparation — enter your HTF levels once, and trade around them all day.
✅ No Auto Calculations — 100% manual by design — ensuring only your analyzed levels are shown, exactly as you defined them.
💡 How to Use
Open the indicator’s settings and manually enter the price levels from your higher-timeframe analysis.
The indicator will plot and preserve those exact levels on your chart.
Switch to your lower timeframe and observe how price reacts around them — without ever needing to redraw (see the sketch below).
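A minimal sketch of the idea using `input.price` fields and `hline`; the published indicator's inputs, styling, and label options will differ:
```pine
//@version=5
indicator("Manual levels sketch", overlay=true)
// Each level is typed in once and persists across symbols, timeframes, and reloads
lvl1 = input.price(0.0, "Level 1 (e.g., OB high)")
lvl2 = input.price(0.0, "Level 2 (e.g., FVG low)")
// Levels left at 0.0 simply plot at zero until you set them
hline(lvl1, "Level 1", color=color.blue)
hline(lvl2, "Level 2", color=color.red)
```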
🎯 Why It’s Useful
Keeps your HTF levels organized and persistent across sessions.
Saves time by avoiding redrawing.
Fits perfectly into ICT / Smart Money  trading workflows.
Ensures full manual control and precision over what’s displayed on your chart.
🧩 Ideal For
ICT and Smart Money traders
Institutional-style manual analysts
Traders marking Mean Thresholds or Quarter Levels of OBs, FVGs, Wicks, etc.
Anyone who wants a clean, reliable way to preserve their manual analysis
Yang-Zhang Volatility (YZVol) by CoryP1990 – Quant Toolkit
The Yang-Zhang Volatility (YZVol) estimator measures realized volatility using both overnight gaps and intraday moves. It combines three components: overnight returns, open-to-close returns, and the Rogers–Satchell term, weighted by Zhang’s k to reduce bias.
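A minimal sketch of that composition, assuming the standard Yang-Zhang weighting k = 0.34 / (1.34 + (N + 1)/(N − 1)); the published script's normalization to daily percent may differ:
```pine
//@version=5
indicator("YZVol sketch")
n = input.int(20, "Length")
o = math.log(open / close[1])  // overnight (close-to-open) return
c = math.log(close / open)     // open-to-close return
rs = math.log(high / close) * math.log(high / open) + math.log(low / close) * math.log(low / open)  // Rogers-Satchell term
k = 0.34 / (1.34 + (n + 1.0) / (n - 1.0))  // Zhang's k
sigma2 = ta.variance(o, n) + k * ta.variance(c, n) + (1 - k) * math.sum(rs, n) / n
yz = 100 * math.sqrt(sigma2)   // daily % volatility
plot(yz, "YZVol", color = yz > yz[1] ? color.green : color.red)
```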
 How to read it 
Line color: Green when YZVol is rising (volatility expansion), Red when falling (volatility compression).
Background: Green tint = above High-vol threshold (active regime). Red tint = below Low-vol threshold (quiet regime).
Units: Displays Daily % by default on any timeframe (values are normalized to daily). An optional toggle shows Annualized % (√252 × Daily %).
 Typical uses 
Spot transitions between quiet and active regimes.
Compare realized vol vs implied vol or a risk-target.
Adapt position sizing to volatility clustering.
 Defaults 
Length = 20
High-vol threshold = 5% (Daily)
Low-vol threshold = 1% (Daily)
Optional: Annualized % display
 Example — SPY (1D) 
During the 2020 crash, YZVol surged to 5.8 % per day, capturing the height of pandemic-era volatility before compressing into a calm regime through 2021. Volatility re-expanded in 2022 due to reinflamed COVID fears and gradually stabilized through 2023. A sharp, liquidity-driven volatility event in August 2024 caused another brief YZVol surge, reflecting the historic one-day VIX spike triggered by market-wide risk-off flows and thin pre-market liquidity. A second, policy-driven expansion followed in April–May 2025, coinciding with the renewed U.S.–China tariff conflict and a sharp equity pullback. Since mid-2025, YZVol has settled near 1 % per day, with the red background confirming that realized volatility has once again compressed into a quiet, low-risk regime.
 Part of the Quant Toolkit — transparent, open-source indicators for modern quantitative analysis. Built by CoryP1990.
India Vix based Strangle Strikes
A clean Nifty–VIX dashboard that converts India VIX into expected daily moves, price ranges, and suggested strangle strikes. Includes VIX %, expanded 1.2× range, and smart rounded strike levels for options trading.
This script provides a professional on-chart dashboard that converts India VIX into actionable trading levels for Nifty. It calculates the VIX-based expected daily move, projected price ranges, expanded 1.2× ranges, and suggested strangle strike prices. Includes clean formatting, color-coded sections, and real-time updates.
Ideal for traders using straddles, strangles, intraday volatility models, range-bound setups, and options-based risk management.
The 1.2× expanded range offers a better probability of success; you may keep 20% of the strangle's value as a stop loss.
The VIX-based system is intended to give an approximate success rate of 70% or higher.
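A minimal sketch of the conversion, assuming the standard 1σ daily-move formula (spot × VIX% ÷ √252) and Nifty's 50-point strike grid; the India VIX symbol is an assumption:
```pine
//@version=5
indicator("VIX strangle sketch", overlay=true)
vix = request.security("NSE:INDIAVIX", "D", close)  // symbol is an assumption
move = close * (vix / 100) / math.sqrt(252)         // 1σ expected daily move
up = close + 1.2 * move                             // 1.2x expanded range
dn = close - 1.2 * move
callStrike = math.round(up / 50) * 50               // round to the 50-point strike grid
putStrike = math.round(dn / 50) * 50
plot(callStrike, "Call strike", color=color.red)
plot(putStrike, "Put strike", color=color.green)
```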
Ulcer Index (UI) by CoryP1990 – Quant Toolkit
The Ulcer Index measures downside volatility, i.e. how deep and persistent drawdowns are from recent highs. Unlike standard deviation, which treats upside and downside equally, the Ulcer Index focuses purely on  pain . It’s a favorite of risk-adjusted performance metrics like the Martin Ratio.
 How it works 
Computes the RMS (root-mean-square) of drawdowns over a look-back window (see the sketch after this list).
Rising UI → drawdowns worsening (stress increasing).
Falling UI → drawdowns shrinking (recovery phase).
Red line = Ulcer Index rising.
Lime line = Ulcer Index falling.
Red background = High-risk regime (above threshold).
Green background = Low-risk regime (below threshold).
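A minimal sketch of the RMS-of-drawdowns calculation, with the thresholds from the defaults below:
```pine
//@version=5
indicator("Ulcer Index sketch")
len = input.int(14, "Length")
hi = ta.highest(close, len)
dd = 100 * (close - hi) / hi                  // percent drawdown from the rolling high (≤ 0)
ui = math.sqrt(math.sum(dd * dd, len) / len)  // root-mean-square of drawdowns
plot(ui, "UI", color = ui > ui[1] ? color.red : color.lime)
hline(10, "High-risk threshold")
hline(5, "Low-risk threshold")
```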
 Use cases 
Gauge portfolio stress levels and timing of recovery phases.
Identify “calm vs storm” periods for position sizing.
Combine with volatility or sentiment measures for regime classification.
 Defaults 
Length = 14
High-risk threshold = 10
Low-risk threshold = 5
 Example — NVIDIA (NVDA, 1D) 
During the sharp decline through 2022, the Ulcer Index repeatedly spiked above 10 while the background turned red, highlighting an extended high-stress drawdown phase. As NVDA began recovering in early 2023, the UI line switched to lime and drifted below 5, marking a transition into a low-risk regime. Throughout 2024–2025, the index stayed mostly sub-5 with brief red pulses on minor corrections, which is clear evidence that downside volatility has remained contained during the broader uptrend.
 Part of the Quant Toolkit - a series of transparent, open-source indicators designed for professional-grade analytics and education. Built by CoryP1990.
korea time with 200
korea time
Start times:
08
09
17
18
23
00
This script makes it easier to read the chart.
The time is displayed automatically, so you don't have to hover the mouse over the bars.
Now you can see the time at a glance.
Have a very happy trading session.
JiNFO
JiNFO is a clean, data-driven overlay that displays key information about the current symbol directly on your chart — without clutter.
🧭 What it shows
Company & Symbol Info – Name, ticker, sector, industry, market cap
Timeframe Label – Current chart timeframe (auto-formatted)
ATR (14) & % Volatility – With color dots for low 🟢 / medium 🟡 / high 🔴 volatility
Moving Average Status – Indicates if price is above or below the selected MA (default 150)
RSI & RSI-SMA (14) – Compact line with live values and color dot for overbought/neutral/oversold zones
Distance from SMA (50) – Shows how far price is from the 50 MA (+/- %) and grades it A–D by distance 🟢🟠🔴
Earnings Countdown – Days remaining until the next earnings date (if available)
⚙️ Customization
Position (top/middle/bottom, left/center/right)
Text size (default Small), color, opacity (100 %)
Toggle any data row on or off
Choose compact or verbose labels
🧩 Purpose
JiNFO replaces bulky data panels with a lightweight, transparent information layer — perfect for traders who want essential fundamentals, volatility, and technical context at a glance.
Machine Learning Moving Average [BackQuant]
Machine Learning Moving Average
 A powerful tool combining clustering, pseudo-machine learning, and adaptive prediction, enabling traders to understand and react to price behavior across multiple market regimes (Bullish, Neutral, Bearish). This script uses a dynamic clustering approach based on percentile thresholds and calculates an adaptive moving average, ideal for forecasting price movements with enhanced confidence levels. 
 What is Percentile Clustering? 
Percentile clustering is a method that sorts and categorizes data into distinct groups based on its statistical distribution. In this script, the clustering process relies on the percentile values of a composite feature (based on technical indicators like RSI, CCI, ATR, etc.). By identifying key thresholds (lower and upper percentiles), the script assigns each data point (price movement) to a cluster (Bullish, Neutral, or Bearish), based on its proximity to these thresholds.
This approach mimics aspects of machine learning, where we “train” the model on past price behavior to predict future movements. The key difference is that this is not true machine learning; rather, it uses data-driven statistical techniques to "cluster" the market into patterns.
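A minimal sketch of percentile-based clustering, using RSI as a stand-in for the script's composite feature (the actual feature blends several indicators, and the thresholds are configurable):
```pine
//@version=5
indicator("Percentile clustering sketch", overlay=true)
lookback = input.int(200, "Look-back")
loP = input.float(10, "Lower percentile")
hiP = input.float(90, "Upper percentile")
feature = ta.rsi(close, 14)  // stand-in for the composite feature
lower = ta.percentile_linear_interpolation(feature, lookback, loP)
upper = ta.percentile_linear_interpolation(feature, lookback, hiP)
// Assign each bar to a regime by its position relative to the percentile thresholds
cluster = feature >= upper ? 1 : feature <= lower ? -1 : 0
barcolor(cluster == 1 ? color.green : cluster == -1 ? color.red : color.gray)
```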
 Why Percentile Clustering is Useful 
 
 Clustering price data into meaningful patterns (Bullish, Neutral, Bearish) helps traders visualize how price behavior can be grouped over time.
 By leveraging past price behavior and technical indicators, percentile clustering adapts dynamically to evolving market conditions.
 It helps you understand whether price behavior today aligns with past bullish or bearish trends, improving market context.
 Clusters can be used to predict upcoming market conditions by identifying regimes with high confidence, improving entry/exit timing.
 
 What This Script Does 
 
 Clustering Based on Percentiles : The script uses historical price data and various technical features to compute a "composite feature" for each bar. This feature is then sorted and clustered based on predefined percentile thresholds (e.g., 10th percentile for lower, 90th percentile for upper).
 Cluster-Based Prediction : Once clustered, the script uses a weighted average, cluster momentum, or regime transition model to predict future price behavior over a specified number of bars.
 Dynamic Moving Average : The script calculates a machine-learning-inspired moving average (MLMA) based on the current cluster, adjusting its behavior according to the cluster regime (Bullish, Neutral, Bearish).
 Adaptive Confidence Levels : Confidence in the predicted return is calculated based on the distance between the current value and the other clusters. The further it is from the next closest cluster, the higher the confidence.
 Visual Cluster Mapping : The script visually highlights different clusters on the chart with distinct colors for Bullish, Neutral, and Bearish regimes, and plots the MLMA line.
 Prediction Output : It projects the predicted price based on the selected method and shows both predicted price and confidence percentage for each prediction horizon.
 Trend Identification : Using the clustering output, the script colors the bars based on the current cluster to reflect whether the market is trending Bullish (green), Bearish (red), or is Neutral (gray).
 
 How Traders Use It 
 
 Predicting Price Movements : The script provides traders with an idea of where prices might go based on past market behavior. Traders can use this forecast for short-term and long-term predictions, guiding their trades.
 Clustering for Regime Analysis : Traders can identify whether the market is in a Bullish, Neutral, or Bearish regime, using that information to adjust trading strategies.
 Adaptive Moving Average for Trend Following : The adaptive moving average can be used as a trend-following indicator, helping traders stay in the market when it’s aligned with the current trend (Bullish or Bearish).
 Entry/Exit Strategy : By understanding the current cluster and its associated trend, traders can time entries and exits with higher precision, taking advantage of favorable conditions when the confidence in the predicted price is high.
 Confidence for Risk Management : The confidence level associated with the predicted returns allows traders to manage risk better. Higher confidence levels indicate stronger market conditions, which can lead to higher position sizes.
 
 Pseudo Machine Learning Aspect 
While the script does not use conventional machine learning models (e.g., neural networks or decision trees), it mimics certain aspects of machine learning in its approach. By using clustering and the dynamic adjustment of a moving average, the model learns from historical data to adjust predictions for future price behavior. The "learning" comes from how the script uses past price data (and technical indicators) to create patterns (clusters) and predict future market movements based on those patterns.
 Why This Is Important for Traders 
 
 Understanding market regimes helps to adjust trading strategies in a way that adapts to current market conditions.
 Forecasting price behavior provides an additional edge, enabling traders to time entries and exits based on predicted price movements.
 By leveraging the clustering technique, traders can separate noise from signal, improving the reliability of trading signals.
 The combination of clustering and predictive modeling in one tool reduces the complexity for traders, allowing them to focus on actionable insights rather than manual analysis.
 
 How to Interpret the Output 
 
 Bullish (Green) Zone : When the price behavior clusters into the Bullish zone, expect upward price movement. The MLMA line will help confirm if the trend remains upward.
 Bearish (Red) Zone : When the price behavior clusters into the Bearish zone, expect downward price movement. The MLMA line will assist in tracking any downward trends.
 Neutral (Gray) Zone : A neutral market condition signals indecision or range-bound behavior. The MLMA line can help track any potential breakouts or trend reversals.
 Predicted Price : The projected price is shown on the chart, based on the cluster's predicted behavior. This provides a useful reference for where the price might move in the near future.
 Prediction Confidence : The confidence percentage helps you gauge the reliability of the predicted price. A higher percentage indicates stronger market confidence in the forecasted move.
 
 Tips for Use 
 
 Combining with Other Indicators : Use the output of this indicator in combination with your existing strategy (e.g., RSI, MACD, or moving averages) to enhance signal accuracy.
 Position Sizing with Confidence : Increase position size when the prediction confidence is high, and decrease size when it’s low, based on the confidence interval.
 Regime-Based Strategy : Consider developing a multi-strategy approach where you use this tool for Bullish or Bearish regimes and a separate strategy for Neutral markets.
 Optimization : Adjust the lookback period and percentile settings to optimize the clustering algorithm based on your asset’s characteristics.
 
 Conclusion 
The  Machine Learning Moving Average   offers a novel approach to price prediction by leveraging percentile clustering and a dynamically adapting moving average. While not a traditional machine learning model, this tool mimics the adaptive behavior of machine learning by adjusting to evolving market conditions, helping traders predict price movements and identify trends with improved confidence and accuracy.
Fractal Dimension Index (FDI) by CoryP1990 – Quant Toolkit
The Fractal Dimension Index (FDI) quantifies how directional or choppy price movement is; in other words, it measures the “roughness” of a trend. FDI values near 1.0–1.3 indicate strong directional trends, while values near 1.5–2.0 reflect chaotic or range-bound behavior. This makes FDI a powerful tool for detecting trend vs. mean-reversion regimes.
 How it works 
Calculates the ratio of average price changes over full and half-length windows to estimate the fractal dimension of price movement (see the sketch after this list).
Teal line = FDI decreasing → trending behavior (market smoother, more directional).
Orange line = FDI increasing → choppiness or consolidation.
Background:
Green tint = trend-friendly regime (FDI below low threshold).
Orange tint = choppy regime (FDI above high threshold).
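A minimal sketch of this estimate, following the common Ehlers-style construction from half- and full-window average ranges (an assumption; the published formula may differ in detail):
```pine
//@version=5
indicator("FDI sketch")
len = input.int(20, "Length")
half = len / 2
n1 = (ta.highest(high, half) - ta.lowest(low, half)) / half              // recent half-window
n2 = (ta.highest(high, half)[half] - ta.lowest(low, half)[half]) / half  // prior half-window
n3 = (ta.highest(high, len) - ta.lowest(low, len)) / len                 // full window
fdi = (math.log(n1 + n2) - math.log(n3)) / math.log(2)
plot(fdi, "FDI", color = fdi < fdi[1] ? color.teal : color.orange)
hline(1.8, "High-FDI threshold")
hline(1.2, "Low-FDI threshold")
```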
 Use cases 
Detect when markets shift from trend-following to mean-reverting conditions.
Filter trades: favor trend strategies when FDI < 1.3 and reversion setups when FDI > 1.7.
Combine with momentum or volatility metrics to classify regimes.
 Defaults 
Length = 20
High-FDI threshold = 1.8
Low-FDI threshold = 1.2
 Example — TSLA (1D, 2021) 
Early 2021 trades choppy to sideways with FDI swinging up toward 1.5, then the index drops below 1.2 as Tesla transitions into a persistent trend-friendly regime through the second half of the year (green background). During the Q4 breakout, FDI holds ~1.0–1.2, confirming strong directionality; brief pullbacks lift FDI back toward the mid-range before trending pressure resumes. At the right edge, FDI sits well below the low threshold, signaling that price remains in a trend-supportive state.
 Part of the Quant Toolkit — transparent, open-source indicators for modern quantitative analysis. Built by CoryP1990.
ChainAggLib - library for aggregation of main chain tickers
Library   "ChainAggLib" 
ChainAggLib — maps a token to its main protocol coin (chain) and returns top-5 exchange tickers for volume aggregation.
Library only (no plots). All helpers are pure functions and do not modify globals.
 norm_sym(s) 
  Parameters:
     s (string) 
 get_base_from_symbol(full_symbol) 
  Parameters:
     full_symbol (string) 
 get_chain_for_token(token_symbol) 
  Parameters:
     token_symbol (string) 
 get_top5_exchange_tickers_for_chain(chain_code) 
  Parameters:
     chain_code (string) 
 get_top5_exchange_tickers_for_token(token_symbol) 
  Parameters:
     token_symbol (string) 
 join_tickers(arr) 
  Parameters:
     arr (array) 
 contains_symbol(arr, symbol) 
  Parameters:
     arr (array) 
     symbol (string) 
 contains_current(arr) 
  Parameters:
     arr (array) 
 get_arr_for_current_token() 
 get_chain_for_current()
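A usage sketch under stated assumptions: the publisher name and version in the import path are placeholders, the token symbol is illustrative, and return types are inferred from the function names:
```pine
//@version=5
indicator("ChainAggLib usage sketch")
import PublisherName/ChainAggLib/1 as agg  // hypothetical import path
if barstate.islast
    tickers = agg.get_top5_exchange_tickers_for_token("UNI")  // illustrative token
    label.new(bar_index, high, agg.join_tickers(tickers))
```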
Time & Sales, Volume Delta and CVD, Volume Imbalance, Tick
This Pine Script (version 6) creates a comprehensive TradingView indicator combining Time & Sales (Tape) with Volume Delta, Order Flow Pressure Indicator (OFPI), Volume Imbalance detection, Volume Delta (VD) histogram, Cumulative Volume Delta (CVD), TICK.US histogram, and a summary gauge table. It overlays on the chart with customizable tables, boxes, lines, and labels for real-time trade analysis, momentum, imbalances, and volume metrics.
 Key Features and Components: 
 
 Time & Sales Table: A dynamic table showing recent trades (up to user-defined rows). Columns include Time, Side (▲/▼), Last Price, Volume (or Price-Weighted Volume). Trades below a volume threshold are hidden. Includes a buy/sell scale bar with percentages. Supports timeframe-based or live tick data fetching.
 
 
 OFPI with Gauge: Calculates net aggressive volume pressure using bar body position, smoothed with T3 moving average. Displays a centered gauge bar (e.g., "░░░|███░░") indicating bullish/bearish momentum or shifts.
 
 
 Volume Imbalance (VI): Detects bullish/bearish gaps between bars. Draws semi-transparent boxes with labels (e.g., "5 tks (vi)") for imbalances or gaps. Limits display to a max number, removes filled ones, and uses magnets (🧲) for gaps.
 
 
 Volume Delta (VD): Approximates buy/sell delta via intrabar pressure or polarity. Displays as unipolar/bipolar histogram, optionally overlapping with regular volume or TICK.US. Shows numerical values (green/red/orange for divergences) and price/VD divergences (see the sketch after this list).
 
 
 Cumulative Volume Delta (CVD): Cumulates VD, reset on anchor timeframe (e.g., daily). Displays as line, area, baseline, or candles. Includes optional EMA smoothing and background fills. Detects divergences with price.
 
 
 TICK.US Histogram: Overlays US Tick index (from symbol "TICK.US") as positive/negative bars during US market hours (9:30-16:00 ET, Mon-Fri). Replaces regular volume in some modes.
 Gauge Summary Table: Bottom-left table with momentum text, OFPI gauge, CVD value, current Tick, and last bar's volume breakdown (total/buy/sell/delta).
 
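A minimal sketch of the polarity-based volume delta approximation, summing signed lower-timeframe volume into each chart bar (the indicator's intrabar-pressure mode is more involved):
```pine
//@version=5
indicator("Volume delta sketch")
ltf = input.timeframe("1", "Lower timeframe")
// Sign each intrabar's volume by its candle polarity, then sum within the chart bar
signed = request.security_lower_tf(syminfo.tickerid, ltf, close >= open ? volume : -volume)
vd = array.sum(signed)
plot(vd, "VD", style=plot.style_columns, color = vd >= 0 ? color.green : color.red)
```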
 Customization Options: 
 General:  Timezone, date format, table position/size, colors (gradients for up/down), calculation mode (timeframe/live tick), volume type (volume/price-volume), thresholds, lengths (e.g., lookback, smoothing).
 Display:  Heights/offsets for histograms, line widths/styles, transparencies, label sizes/alignments, divergences, MA on volume, CVD smoothing/background.
 Technical:  Lower timeframe precision (auto or custom), anchor for CVD reset, max VIs to show.
Other: Toggles for VI, TICK.US, numerical values, divergences.
 Credit 
// FuturesCall @ fcalgobot.com
//Time & Sales (Tape)  
// CVD base on Luxalgo CVD indicator
// Momentum Gauge by DskyzInvestments
// volume imbalance by ...
LibVPrf
Library   "LibVPrf" 
This library provides an object-oriented framework for volume
profile analysis in Pine Script®. It is built around the `VProf`
User-Defined Type (UDT), which encapsulates all data, settings,
and statistical metrics for a single profile, enabling stateful
analysis with on-demand calculations.
Key Features:
1.  **Object-Oriented Design (UDT):** The library is built around
the `VProf` UDT. This object encapsulates all profile data
and provides methods for its full lifecycle management,
including creation, cloning, clearing, and merging of profiles.
2.  **Volume Allocation (`AllotMode`):** Offers two methods for
allocating a bar's volume:
- **Classic:** Assigns the entire bar's volume to the close
price bucket.
- **PDF:** Distributes volume across the bar's range using a
statistical price distribution model from the `LibBrSt` library.
3.  **Buy/Sell Volume Splitting (`SplitMode`):** Provides methods
for classifying volume into buying and selling pressure:
- **Classic:** Classifies volume based on the bar's color (Close vs. Open).
- **Dynamic:** A specific model that analyzes candle structure
(body vs. wicks) and a short-term trend factor to
estimate the buy/sell share at each price level.
4.  **Statistical Analysis (On-Demand):** Offers a suite of
statistical metrics calculated using a "Lazy Evaluation"
pattern (computed only when requested via `get...` methods):
- **Central Tendency:** Point of Control (POC), VWAP, and Median.
- **Dispersion:** Value Area (VA) and Population Standard Deviation.
- **Shape:** Skewness and Excess Kurtosis.
- **Delta:** Cumulative Volume Delta, including its
historical high/low watermarks.
5.  **Structural Analysis:** Includes a parameter-free method
(`getSegments`) to decompose a profile into its fundamental
unimodal segments, allowing for modality detection (e.g.,
identifying bimodal profiles).
6.  **Dynamic Profile Management:**
- **Auto-Fitting:** Profiles set to `dynamic = true` will
automatically expand their price range to fit new data.
- **Manipulation:** The resolution, price range, and Value Area
of a dynamic profile can be changed at any time. This
triggers a resampling process that uses a **linear
interpolation model** to re-bucket existing volume.
- **Assumption:** Non-dynamic profiles are fixed and will throw
a `runtime.error` if `addBar` is called with data
outside their initial range.
7.  **Bucket-Level Access:** Provides getter methods for direct
iteration and analysis of the raw buy/sell volume and price
boundaries of each individual price bucket.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
 create(buckets, rangeUp, rangeLo, dynamic, valueArea, allot, estimator, cdfSteps, split, trendLen) 
  Construct a new `VProf` object with fixed bucket count & range.
  Parameters:
     buckets (int) : series int        number of price buckets ≥ 1
     rangeUp (float) : series float      upper price bound (absolute)
     rangeLo (float) : series float      lower price bound (absolute)
     dynamic (bool) : series bool       Flag for dynamic adaption of profile ranges
     valueArea (int) : series int        Percentage of total volume to include in the Value Area (1..100)
     allot (series AllotMode) : series AllotMode  Allocation mode `classic` or `pdf`  (default `classic`)
     estimator (series PriceEst enum from AustrianTradingMachine/LibBrSt/1) : series LibBrSt.PriceEst PDF model when `model == PDF`. (default = 'uniform')
     cdfSteps (int) : series int        even #sub-intervals for Simpson rule (default 20)
     split (series SplitMode) : series SplitMode  Buy/Sell determination (default `classic`)
     trendLen (int) : series int        Look‑back bars for trend factor (default 3)
  Returns: VProf             freshly initialised profile
 method clone(self) 
  Create a deep copy of the volume profile.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf  Profile object to copy
  Returns: VProf  A new, independent copy of the profile
 method clear(self) 
  Reset all bucket tallies while keeping configuration intact.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf  profile object
  Returns: VProf  cleared profile (chaining)
 method merge(self, srcABuy, srcASell, srcRangeUp, srcRangeLo, srcCvd, srcCvdHi, srcCvdLo) 
  Merges volume data from a source profile into the current profile.
If resizing is needed, it performs a high-fidelity re-bucketing of existing
volume using a linear interpolation model inferred from neighboring buckets,
preventing aliasing artifacts and ensuring accurate volume preservation.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         The target profile object to merge into.
     srcABuy (array) : array  The source profile's buy volume bucket array.
     srcASell (array) : array  The source profile's sell volume bucket array.
     srcRangeUp (float) : series float  The upper price bound of the source profile.
     srcRangeLo (float) : series float  The lower price bound of the source profile.
     srcCvd (float) : series float  The final Cumulative Volume Delta (CVD) value of the source profile.
     srcCvdHi (float) : series float  The historical high-water mark of the CVD from the source profile.
     srcCvdLo (float) : series float  The historical low-water mark of the CVD from the source profile.
  Returns: VProf         `self` (chaining), now containing the merged data.
 method addBar(self, offset) 
  Add current bar’s volume to the profile (call once per realtime bar).
classic mode: allocates all volume to the close bucket and classifies
by `close >= open`. PDF mode: distributes volume across buckets by the
estimator’s CDF mass. For `split = dynamic`, the buy/sell share per
price is computed via context-driven piecewise s(u).
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf       Profile object
     offset (int) : series int  To offset the calculated bar
  Returns: VProf       `self` (method chaining)
 method setBuckets(self, buckets) 
  Sets the number of buckets for the volume profile.
Behavior depends on the `isDynamic` flag.
- If `dynamic = true`: Works on filled profiles by re-bucketing to a new resolution.
- If `dynamic = false`: Only works on empty profiles to prevent accidental changes.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf       Profile object
     buckets (int) : series int  The new number of buckets
  Returns: VProf       `self` (chaining)
 method setRanges(self, rangeUp, rangeLo) 
  Sets the price range for the volume profile.
Behavior depends on the `dynamic` flag.
- If `dynamic = true`: Works on filled profiles by re-bucketing existing volume.
- If `dynamic = false`: Only works on empty profiles to prevent accidental changes.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object
     rangeUp (float) : series float  The new upper price bound
     rangeLo (float) : series float  The new lower price bound
  Returns: VProf         `self` (chaining)
 method setValueArea(self, valueArea) 
  Set the percentage of volume for the Value Area. If the value
changes, the profile is finalized again.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf       Profile object
     valueArea (int) : series int  The new Value Area percentage (0..100)
  Returns: VProf       `self` (chaining)
 method getBktBuyVol(self, idx) 
  Get Buy volume of a bucket.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object
     idx (int) : series int    Bucket index
  Returns: series float  Buy volume ≥ 0
 method getBktSellVol(self, idx) 
  Get Sell volume of a bucket.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object
     idx (int) : series int    Bucket index
  Returns: series float  Sell volume ≥ 0
 method getBktBnds(self, idx) 
  Get Bounds of a bucket.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf       Profile object
     idx (int) : series int  Bucket index
  Returns:  
up  series float  The upper price bound of the bucket.
lo  series float  The lower price bound of the bucket.
 method getPoc(self) 
  Get POC information.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf  Profile object
  Returns:  
pocIndex  series int    The index of the Point of Control (POC) bucket.
pocPrice  series float  The mid-price of the Point of Control (POC) bucket.
 method getVA(self) 
  Get Value Area (VA) information.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf  Profile object
  Returns:  
vaUpIndex  series int    The index of the upper bound bucket of the Value Area.
vaUpPrice  series float  The upper price bound of the Value Area.
vaLoIndex  series int    The index of the lower bound bucket of the Value Area.
vaLoPrice  series float  The lower price bound of the Value Area.
 method getMedian(self) 
  Get the profile's median price and its bucket index. Calculates the value on-demand if stale.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object.
  Returns:     
medianIndex  series int    The index of the bucket containing the Median.
medianPrice  series float  The Median price of the profile.
 method getVwap(self) 
  Get the profile's VWAP and its bucket index. Calculates the value on-demand if stale.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object.
  Returns:     
vwapIndex    series int    The index of the bucket containing the VWAP.
vwapPrice    series float  The Volume Weighted Average Price of the profile.
 method getStdDev(self) 
  Get the profile's volume-weighted standard deviation. Calculates the value on-demand if stale.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object.
  Returns: series float  The Standard deviation of the profile.
 method getSkewness(self) 
  Get the profile's skewness. Calculates the value on-demand if stale.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object.
  Returns: series float  The Skewness of the profile.
 method getKurtosis(self) 
  Get the profile's excess kurtosis. Calculates the value on-demand if stale.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf         Profile object.
  Returns: series float  The Kurtosis of the profile.
 method getSegments(self) 
  Get the profile's fundamental unimodal segments. Calculates on-demand if stale.
Uses a parameter-free, pivot-based recursive algorithm.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf        The profile object.
  Returns: matrix  A 2-column matrix where each row is a [start, end] bucket-index pair.
 method getCvd(self) 
  Cumulative Volume Delta (CVD) like metric over all buckets.
  Namespace types: VProf
  Parameters:
     self (VProf) : VProf      Profile object.
  Returns:  
cvd    series float  The final Cumulative Volume Delta (Total Buy Vol - Total Sell Vol).
cvdHi  series float  The running high-water mark of the CVD as volume was added.
cvdLo  series float  The running low-water mark of the CVD as volume was added.
 VProf 
  VProf  Bucketed Buy/Sell volume profile plus meta information.
  Fields:
     buckets (series int) : int              Number of price buckets (granularity ≥1)
     rangeUp (series float) : float            Upper price range (absolute)
     rangeLo (series float) : float            Lower price range (absolute)
     dynamic (series bool) : bool             Flag for dynamic adaption of profile ranges
     valueArea (series int) : int              Percentage of total volume to include in the Value Area (1..100)
     allot (series AllotMode) : AllotMode        Allocation mode `classic` or `pdf`
     estimator (series PriceEst enum from AustrianTradingMachine/LibBrSt/1) : LibBrSt.PriceEst Price density model when  `model == PDF`
     cdfSteps (series int) : int              Simpson integration resolution (even ≥2)
     split (series SplitMode) : SplitMode        Buy/Sell split strategy per bar
     trendLen (series int) : int              Look‑back length for trend factor (≥1)
     maxBkt (series int) : int              User-defined number of buckets (unclamped)
     aBuy (array) : array     Buy volume per bucket
     aSell (array) : array     Sell volume per bucket
     cvd (series float) : float            Final Cumulative Volume Delta (Total Buy Vol - Total Sell Vol).
     cvdHi (series float) : float            Running high-water mark of the CVD as volume was added.
     cvdLo (series float) : float            Running low-water mark of the CVD as volume was added.
     poc (series int) : int              Index of max‑volume bucket (POC). Is `na` until calculated.
     vaUp (series int) : int              Index of upper Value‑Area bound. Is `na` until calculated.
     vaLo (series int) : int              Index of lower value‑Area bound. Is `na` until calculated.
     median (series float) : float            Median price of the volume distribution. Is `na` until calculated.
     vwap (series float) : float            Profile VWAP (Volume Weighted Average Price). Is `na` until calculated.
     stdDev (series float) : float            Standard Deviation of volume around the VWAP. Is `na` until calculated.
     skewness (series float) : float            Skewness of the volume distribution. Is `na` until calculated.
     kurtosis (series float) : float            Excess Kurtosis of the volume distribution. Is `na` until calculated.
     segments (matrix) : matrix      A 2-column matrix where each row is a [start, end] bucket-index pair. Is `na` until calculated.
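A usage sketch under stated assumptions: the publisher prefix is taken from the LibBrSt reference above, the version number is a placeholder, and optional `create` parameters are left at their documented defaults:
```pine
//@version=5
indicator("LibVPrf usage sketch", overlay=true)
import AustrianTradingMachine/LibVPrf/1 as vp  // version number is a placeholder
var prof = vp.create(buckets = 50, rangeUp = close * 1.05, rangeLo = close * 0.95, dynamic = true, valueArea = 70)
prof.addBar(0)  // allocate the current bar's volume into the profile
if barstate.islast
    [pocIdx, pocPrice] = prof.getPoc()
    line.new(bar_index - 50, pocPrice, bar_index, pocPrice, color = color.orange, width = 2)
```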
LibBrSt
Library   "LibBrSt" 
This is a library for quantitative analysis, designed to estimate
the statistical properties of price movements *within* a single
OHLC bar, without requiring access to tick data. It provides a
suite of estimators based on various statistical and econometric
models, allowing for analysis of intra-bar volatility and
price distribution.
Key Capabilities:
1.  **Price Distribution Models (`PriceEst`):** Provides a selection
of estimators that model intra-bar price action as a probability
distribution over the [Low, High] range. This allows for the
calculation of the intra-bar mean (`priceMean`) and standard
deviation (`priceStdDev`) in absolute price units. Models include:
- **Symmetric Models:** `uniform`, `triangular`, `arcsine`,
`betaSym`, and `t4Sym` (Student-t with fat tails).
- **Skewed Models:** `betaSkew` and `t4Skew`, which adjust
their shape based on the Open/Close position.
- **Model Assumptions:** The skewed models rely on specific
internal constants. `betaSkew` uses a fixed concentration
parameter (`BETA_SKEW_CONCENTRATION = 4.0`), and `t4Sym`/`t4Skew`
use a heuristic scaling factor (`T4_SHAPE_FACTOR`)
to map the distribution.
2.  **Econometric Log-Return Estimators (`LogEst`):** Includes a set of
econometric estimators for calculating the volatility (`logStdDev`)
and drift (`logMean`) of logarithmic returns within a single bar.
These are unit-less measures. Models include:
- **Parkinson (1980):** A High-Low range estimator.
- **Garman-Klass (1980):** An OHLC-based estimator.
- **Rogers-Satchell (1991):** An OHLC estimator that accounts
for non-zero drift.
3.  **Distribution Analysis (PDF/CDF):** Provides functions to work
with the Probability Density Function (`pricePdf`) and
Cumulative Distribution Function (`priceCdf`) of the
chosen price model.
- **Note on `priceCdf`:** This function uses analytical (exact)
calculations for the `uniform`, `triangular`, and `arcsine`
models. For all other models (e.g., `betaSkew`, `t4Skew`),
it uses **numerical integration (Simpson's rule)** as
an approximation of the cumulative probability.
4.  **Mathematical Functions:** The library's Beta distribution
models (`betaSym`, `betaSkew`) are supported by an internal
implementation of the natural log-gamma function, which is
based on the Lanczos approximation.
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
 priceStdDev(estimator, offset) 
  Estimates **σ̂** (standard deviation) *in price units* for the current
bar, according to the chosen `PriceEst` distribution assumption.
  Parameters:
     estimator (series PriceEst) : series PriceEst  Distribution assumption (see enum).
     offset (int) : series int       To offset the calculated bar
  Returns: series float     σ̂ ≥ 0 ; `na` if undefined (e.g. zero range).
 priceMean(estimator, offset) 
  Estimates **μ̂** (mean price) for the chosen `PriceEst` within the
current bar.
  Parameters:
     estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
     offset (int) : series int    To offset the calculated bar
  Returns: series float  μ̂ in price units.
 pricePdf(estimator, price, offset) 
  Probability-density under the chosen `PriceEst` model.
**Returns 0** when `p` is outside the current bar’s [Low, High] range.
  Parameters:
     estimator (series PriceEst) : series PriceEst  Distribution assumption (see enum).
     price (float) : series float  Price level to evaluate.
     offset (int) : series int    To offset the calculated bar
  Returns: series float  Density value.
 priceCdf(estimator, upper, lower, steps, offset) 
  Cumulative probability **between** `upper` and `lower` under
the chosen `PriceEst` model. Outside-bar regions contribute zero.
Uses a fast, analytical calculation for Uniform, Triangular, and
Arcsine distributions, and defaults to numerical integration
(Simpson's rule) for more complex models.
  Parameters:
     estimator (series PriceEst) : series PriceEst Distribution assumption (see enum).
     upper (float) : series float  Upper Integration Boundary.
     lower (float) : series float  Lower Integration Boundary.
     steps (int) : series int    # of sub-intervals for numerical integration (if used).
     offset (int) : series int    To offset the calculated bar.
  Returns: series float  Probability mass ∈ [0, 1].
 logStdDev(estimator, offset) 
  Estimates **σ̂** (standard deviation) of *log-returns* for the current bar.
  Parameters:
     estimator (series LogEst) : series LogEst  Distribution assumption (see enum).
     offset (int) : series int     To offset the calculated bar
  Returns: series float   σ̂ (unit-less); `na` if undefined.
 logMean(estimator, offset) 
  Estimates μ̂ (mean log-return / drift) for the chosen `LogEst`.
The returned value is consistent with the assumptions of the
selected volatility estimator.
  Parameters:
     estimator (series LogEst) : series LogEst  Distribution assumption (see enum).
     offset (int) : series int     To offset the calculated bar
  Returns: series float   μ̂ (unit-less log-return).
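A usage sketch under the same assumptions (placeholder version; the `LogEst` member name is inferred from the docs above):
```pine
//@version=5
indicator("LibBrSt usage sketch")
import AustrianTradingMachine/LibBrSt/1 as brst  // version number is a placeholder
// Intra-bar mean and standard deviation in price units, triangular price model
mu = brst.priceMean(brst.PriceEst.triangular, 0)
sigma = brst.priceStdDev(brst.PriceEst.triangular, 0)
// Parkinson high-low volatility of log-returns (unit-less); member name assumed
pk = brst.logStdDev(brst.LogEst.parkinson, 0)
plot(sigma, "Intra-bar σ (price units)")
plot(pk, "Parkinson σ (log-returns)", color=color.orange)
```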
Scientific Correlation Testing Framework
Scientific Correlation Testing Framework - Comprehensive Guide
Introduction to Correlation Analysis
What is Correlation?
Correlation is a statistical measure that describes the degree to which two assets move in relation to each other. Think of it like measuring how closely two dancers move together on a dance floor.
Perfect Positive Correlation (+1.0): Both dancers move in perfect sync, same direction, same speed
Perfect Negative Correlation (-1.0): Both dancers move in perfect sync but in opposite directions
Zero Correlation (0): The dancers move completely independently of each other
In financial markets, correlation helps us understand relationships between different assets, which is crucial for:
Portfolio diversification
Risk management
Pairs trading strategies
Hedging positions
Market analysis
Why This Script is Special
This script goes beyond simple correlation calculations by providing:
Two different correlation methods (Pearson and Spearman)
Statistical significance testing to ensure results are meaningful
Rolling correlation analysis to track how relationships change over time
Visual representation for easy interpretation
Comprehensive statistics table with detailed metrics
Deep Dive into the Script's Components
1. Input Parameters Explained
Symbol Selection:
This allows you to select the second asset to compare with the chart's primary asset
Default is Apple (NASDAQ:AAPL), but you can change this to any symbol
Example: If you're viewing a Bitcoin chart, you might set this to "NASDAQ:TSLA" to see if Bitcoin and Tesla are correlated
Correlation Window (60): This is the number of periods used to calculate the main correlation
Larger values (e.g., 100-500) provide more stable, long-term correlation measures
Smaller values (e.g., 10-50) are more responsive to recent price movements
60 is a good balance for most daily charts (about 3 months of trading days)
Rolling Correlation Window (20): A shorter window to detect recent changes in correlation
This helps identify when the relationship between assets is strengthening or weakening
Default of 20 is roughly one month of trading days
Return Type: This determines how price changes are calculated
Simple Returns: (Today's Price - Yesterday's Price) / Yesterday's Price
Easy to understand: "The asset went up 2% today"
Log Returns: Natural logarithm of (Today's Price / Yesterday's Price)
More mathematically elegant for statistical analysis
Better for time-additive properties (returns over multiple periods)
Less sensitive to extreme values.
Confidence Level (95%): This determines how certain we want to be about our results
95% confidence means we accept a 5% chance of being wrong (false positive)
Higher confidence (e.g., 99%) makes the test more strict
Lower confidence (e.g., 90%) makes the test more lenient
95% is the standard in most scientific research
Show Statistical Significance: When enabled, the script will test if the correlation is statistically significant or just due to random chance.
Display options control what you see on the chart:
Show Pearson/Spearman/Rolling Correlation: Toggle each correlation type on/off
Show Scatter Plot: Displays a scatter plot of returns (limited to recent points to avoid performance issues)
Show Statistical Tests: Enables the detailed statistics table
Table Text Size: Adjusts the size of text in the statistics table
2. Functions Explained
calcReturns():
This function calculates price returns based on your selected method:
Log Returns:
Formula: ln(Price_t / Price_t-1)
Example: If a stock goes from $100 to $101, the log return is ln(101/100) = ln(1.01) ≈ 0.00995 or 0.995%
Benefits: More symmetric, time-additive, and better for statistical modeling
Simple Returns:
Formula: (Price_t - Price_t-1) / Price_t-1
Example: If a stock goes from $100 to $101, the simple return is (101-100)/100 = 0.01 or 1%
Benefits: More intuitive and easier to understand
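A plausible reconstruction of calcReturns() matching the two formulas above (the body is a sketch, not the script's exact code):
```pine
//@version=5
indicator("calcReturns sketch")
useLog = input.bool(true, "Use log returns")
calcReturns(float price) =>
    useLog ? math.log(price / price[1]) : (price - price[1]) / price[1]
plot(calcReturns(close))
```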
rankArray():
This function calculates the rank of each value in an array, which is used for Spearman correlation:
How ranking works:
The smallest value gets rank 1
The second smallest gets rank 2, and so on
For ties (equal values), they get the average of their ranks
Example: For values [3, 2, 5, 2]
Sorted: [2, 2, 3, 5]
Ranks: [3, 1.5, 4, 1.5] (the two 2s tie for ranks 1 and 2, so they both get 1.5)
Why this matters: Spearman correlation uses ranks instead of actual values, making it less sensitive to outliers and non-linear relationships.
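A plausible reconstruction of rankArray() with tie-averaging, where each value's rank is (count of smaller values) + (ties + 1) / 2:
```pine
//@version=5
indicator("rankArray sketch")
rankArray(array<float> src) =>
    n = array.size(src)
    ranks = array.new_float(n)
    for i = 0 to n - 1
        v = array.get(src, i)
        float less = 0
        float ties = 0
        for j = 0 to n - 1
            w = array.get(src, j)
            less += w < v ? 1 : 0
            ties += w == v ? 1 : 0
        array.set(ranks, i, less + (ties + 1) / 2)
    ranks
// For [3, 2, 5, 2] this prints [3, 1.5, 4, 1.5], matching the example above
if barstate.islast
    label.new(bar_index, high, str.tostring(rankArray(array.from(3.0, 2.0, 5.0, 2.0))))
```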
pearsonCorr():
This function calculates the Pearson correlation coefficient:
Mathematical Formula:
r = (nΣxy - ΣxΣy) / √[(nΣx² - (Σx)²)(nΣy² - (Σy)²)]
Where x and y are the two variables, and n is the sample size
What it measures:
The strength and direction of the linear relationship between two variables
Values range from -1 (perfect negative linear relationship) to +1 (perfect positive linear relationship)
0 indicates no linear relationship
Example:
If two stocks have a Pearson correlation of 0.8, they have a strong positive linear relationship
When one stock goes up, the other tends to go up in a fairly consistent proportion
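For reference, Pine's built-in ta.correlation computes exactly this Pearson r; a compact sketch on log returns:
```pine
//@version=5
indicator("Pearson sketch")
sym = input.symbol("NASDAQ:AAPL", "Second asset")
len = input.int(60, "Correlation window")
p2 = request.security(sym, timeframe.period, close)
ret1 = math.log(close / close[1])
ret2 = math.log(p2 / p2[1])
r = ta.correlation(ret1, ret2, len)  // Pearson r of the two return series
plot(r, "Pearson r")
hline(0)
```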
spearmanCorr():
This function calculates the Spearman rank correlation:
How it works:
Convert each value in both datasets to its rank
Calculate the Pearson correlation on the ranks instead of the original values
What it measures:
The strength and direction of the monotonic relationship between two variables
A monotonic relationship is one where as one variable increases, the other either consistently increases or decreases
It doesn't require the relationship to be linear
When to use it instead of Pearson:
When the relationship is monotonic but not linear
When there are significant outliers in the data
When the data is ordinal (ranked) rather than interval/ratio
Example:
If two stocks have a Spearman correlation of 0.7, they have a strong positive monotonic relationship
When one stock goes up, the other tends to go up, but not necessarily in a straight-line relationship
tStatistic():
This function calculates the t-statistic for correlation:
Mathematical Formula: t = r × √((n-2)/(1-r²))
Where r is the correlation coefficient and n is the sample size
What it measures:
How many standard errors the correlation is away from zero
Used to test the null hypothesis that the true correlation is zero
Interpretation:
Larger absolute t-values indicate stronger evidence against the null hypothesis
Generally, a t-value greater than 2 (in absolute terms) is considered statistically significant at the 95% confidence level
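The corresponding one-liner, with a worked example: for r = 0.30 and n = 60, t = 0.30 × √(58 / 0.91) ≈ 2.39, which exceeds the ≈2.00 critical value at 95% confidence:
```pine
//@version=5
indicator("t-statistic sketch")
tStat(float r, int n) =>
    r * math.sqrt((n - 2) / (1 - r * r))
plot(tStat(0.30, 60))  // ≈ 2.39 → statistically significant at 95%
```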
criticalT() and pValue():
These functions provide approximations for statistical significance testing:
criticalT():
Returns the critical t-value for a given degrees of freedom (df) and significance level
The critical value is the threshold that the t-statistic must exceed to be considered statistically significant
Uses approximations since Pine Script doesn't have built-in statistical distribution functions
pValue():
Estimates the p-value for a given t-statistic and degrees of freedom
The p-value is the probability of observing a correlation as strong as the one calculated, assuming the true correlation is zero
Smaller p-values indicate stronger evidence against the null hypothesis
Standard interpretation:
p < 0.01: Very strong evidence (marked with **)
p < 0.05: Strong evidence (marked with *)
p ≥ 0.05: Weak evidence, not statistically significant
stdev():
This function calculates the standard deviation of a dataset:
Mathematical Formula: σ = √(Σ(x-μ)²/(n-1))
Where x is each value, μ is the mean, and n is the sample size
What it measures:
The amount of variation or dispersion in a set of values
A low standard deviation indicates that the values tend to be close to the mean
A high standard deviation indicates that the values are spread out over a wider range
Why it matters for correlation:
Standard deviation is used in calculating the correlation coefficient
It also provides information about the volatility of each asset's returns
Comparing standard deviations helps understand the relative riskiness of the two assets.
3. Getting Price Data
price1: The closing price of the primary asset (the chart you're viewing)
price2: The closing price of the secondary asset (the one you selected in the input parameters)
Returns are used instead of raw prices because:
Returns are typically stationary (mean and variance stay constant over time)
Returns normalize for price levels, allowing comparison between assets of different values
Returns represent what investors actually care about: percentage changes in value
4. Information Table
Creates a table to display statistics
Only shows on the last bar to avoid performance issues
Positioned in the top right of the chart
Has 2 columns and 15 rows
Populating the Table
The script then populates the table with various statistics:
Header Row: "Metric" and "Value"
Sample Information: Sample size and return type
Pearson Correlation: Value, t-statistic, p-value, and significance
Spearman Correlation: Value, t-statistic, p-value, and significance
Rolling Correlation: Current value
Standard Deviations: For both assets
Interpretation: Text description of the correlation strength
The table uses color coding to highlight important information:
Green for significant positive results
Red for significant negative results
Yellow for borderline significance
Color-coded headers for each section
=> Practical Applications and Interpretation
How to Interpret the Results
Correlation Strength
0.0 to 0.3 (or 0.0 to -0.3): Weak or no correlation
The assets move mostly independently of each other
Good for diversification purposes
0.3 to 0.7 (or -0.3 to -0.7): Moderate correlation
The assets show some tendency to move together (or in opposite directions)
May be useful for certain trading strategies but not extremely reliable
0.7 to 1.0 (or -0.7 to -1.0): Strong correlation
The assets show a strong tendency to move together (or in opposite directions)
Can be useful for pairs trading, hedging, or as a market indicator
Statistical Significance
p < 0.01: Very strong evidence that the correlation is real
Marked with ** in the table
Very unlikely to be due to random chance
p < 0.05: Strong evidence that the correlation is real
Marked with * in the table
Unlikely to be due to random chance
p ≥ 0.05: Weak evidence that the correlation is real
Not marked in the table
Could easily be due to random chance
Rolling Correlation
The rolling correlation shows how the relationship between assets changes over time
If the rolling correlation is much different from the long-term correlation, it suggests the relationship is changing
This can indicate:
A shift in market regime
Changing fundamentals of one or both assets
Temporary market dislocations that might present trading opportunities
Trading Applications
1. Portfolio Diversification
Goal: Reduce overall portfolio risk by combining assets that don't move together
Strategy: Look for assets with low or negative correlations
Example: If you hold tech stocks, you might add some utilities or bonds that have low correlation with tech
2. Pairs Trading
Goal: Profit from the relative price movements of two correlated assets
Strategy:
Find two assets with strong historical correlation
When their prices diverge (one goes up while the other goes down)
Buy the underperforming asset and short the outperforming asset
Close the positions when they converge back to their normal relationship
Example: If Coca-Cola and Pepsi are highly correlated but Coca-Cola drops while Pepsi rises, you might buy Coca-Cola and short Pepsi
3. Hedging
Goal: Reduce risk by taking an offsetting position in a negatively correlated asset
Strategy: Find assets that tend to move in opposite directions
Example: If you hold a portfolio of stocks, you might buy some gold or government bonds that tend to rise when stocks fall
4. Market Analysis
Goal: Understand market dynamics and interrelationships
Strategy: Analyze correlations between different sectors or asset classes
Example:
If tech stocks and semiconductor stocks are highly correlated, movements in one might predict movements in the other
If the correlation between stocks and bonds changes, it might signal a shift in market expectations
5. Risk Management
Goal: Understand and manage portfolio risk
Strategy: Monitor correlations to identify when diversification benefits might be breaking down
Example: During market crises, many assets that normally have low correlations can become highly correlated (correlation convergence), reducing diversification benefits
Advanced Interpretation and Caveats
Correlation vs. Causation
Important Note: Correlation does not imply causation
Example: Ice cream sales and drowning incidents are correlated (both increase in summer), but one doesn't cause the other
Implication: Just because two assets move together doesn't mean one causes the other to move
Solution: Look for fundamental economic reasons why assets might be correlated
Non-Stationary Correlations
Problem: Correlations between assets can change over time
Causes:
Changing market conditions
Shifts in monetary policy
Structural changes in the economy
Changes in the underlying businesses
Solution: Use rolling correlations to monitor how relationships change over time
Outliers and Extreme Events
Problem: Extreme market events can distort correlation measurements
Example: During a market crash, many assets may move in the same direction regardless of their normal relationship
Solution:
Use Spearman correlation, which is less sensitive to outliers
Be cautious when interpreting correlations during extreme market conditions
Sample Size Considerations
Problem: Small sample sizes can produce unreliable correlation estimates
Rule of Thumb: Use at least 30 data points for a rough estimate, 60+ for more reliable results
Solution:
Use the default correlation length of 60 or higher
Be skeptical of correlations calculated with small samples
Timeframe Considerations
Problem: Correlations can vary across different timeframes
Example: Two assets might be positively correlated on a daily basis but negatively correlated on a weekly basis
Solution:
Test correlations on multiple timeframes
Use the timeframe that matches your trading horizon
Look-Ahead Bias
Problem: Using information that wouldn't have been available at the time of trading
Example: Calculating correlation using future data
Solution: This script avoids look-ahead bias by using only historical data
Best Practices for Using This Script
1. Appropriate Parameter Selection
Correlation Window:
For short-term trading: 20-50 periods
For medium-term analysis: 50-100 periods
For long-term analysis: 100-500 periods
Rolling Window:
Should be shorter than the main correlation window
Typically 1/3 to 1/2 of the main window
Return Type:
For most applications: Log Returns (better statistical properties)
For simplicity: Simple Returns (easier to interpret)
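For reference, the two return types differ by a single line of code; a quick sketch:

```pine
//@version=5
indicator("Return Types (sketch)")
simpleRet = close / close[1] - 1       // simple return: intuitive percentage change
logRet    = math.log(close / close[1]) // log return: additive across bars
plot(simpleRet, "Simple", color.teal)
plot(logRet, "Log", color.maroon)
```

Log returns add across bars (a 10-bar return is the sum of ten 1-bar log returns), which is why they behave better in statistical tests.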
2. Validation and Testing
Out-of-Sample Testing:
Calculate correlations on one time period
Test if they hold in a different time period
Multiple Timeframes:
Check if correlations are consistent across different timeframes
Economic Rationale:
Ensure there's a logical reason why assets should be correlated
3. Monitoring and Maintenance
Regular Review:
Correlations can change, so review them regularly
Alerts:
Set up alerts for significant correlation changes
Documentation:
Keep notes on why certain assets are correlated and what might change that relationship
4. Integration with Other Analysis
Fundamental Analysis:
Combine correlation analysis with fundamental factors
Technical Analysis:
Use correlation analysis alongside technical indicators
Market Context:
Consider how market conditions might affect correlations
Conclusion
This Scientific Correlation Testing Framework provides a comprehensive tool for analyzing relationships between financial assets. By offering both Pearson and Spearman correlation methods, statistical significance testing, and rolling correlation analysis, it goes beyond simple correlation measures to provide deeper insights.
For beginners, this script might seem complex, but it's built on fundamental statistical concepts that become clearer with use. Start with the default settings and focus on interpreting the main correlation lines and the statistics table. As you become more comfortable, you can adjust the parameters and explore more advanced applications.
Remember that correlation analysis is just one tool in a trader's toolkit. It should be used in conjunction with other forms of analysis and with a clear understanding of its limitations. When used properly, it can provide valuable insights for portfolio construction, risk management, and pairs trading strategy development.
LibWght
Library "LibWght"
This is a library of mathematical and statistical functions
designed for quantitative analysis in Pine Script. Its core
principle is the integration of a custom weighting series
(e.g., volume) into a wide array of standard technical
analysis calculations.
Key Capabilities:
1.  **Universal Weighting:** All exported functions accept a `weight`
parameter. This allows standard calculations (like moving
averages, RSI, and standard deviation) to be influenced by an
external data series, such as volume or tick count.
2.  **Weighted Averages and Indicators:** Includes a comprehensive
collection of weighted functions:
- **Moving Averages:** `wSma`, `wEma`, `wWma`, `wRma` (Wilder's),
`wHma` (Hull), and `wLSma` (Least Squares / Linear Regression).
- **Oscillators & Ranges:** `wRsi`, `wAtr` (Average True Range),
`wTr` (True Range), and `wR` (High-Low Range).
3.  **Volatility Decomposition:** Provides functions to decompose
total variance into distinct components for market analysis.
- **Two-Way Decomposition (`wTotVar`):** Separates variance into
**between-bar** (directional) and **within-bar** (noise)
components.
- **Three-Way Decomposition (`wLRTotVar`):** Decomposes variance
relative to a linear regression into **Trend** (explained by
the LR slope), **Residual** (mean-reversion around the
LR line), and **Within-Bar** (noise) components.
- **Local Volatility (`wLRLocTotStdDev`):** Measures the total
"noise" (within-bar + residual) around the trend line.
4.  **Weighted Statistics and Regression:** Provides a robust
function for Weighted Linear Regression (`wLinReg`) and a
full suite of related statistical measures:
- **Between-Bar Stats:** `wBtwVar`, `wBtwStdDev`, `wBtwStdErr`.
- **Residual Stats:** `wResVar`, `wResStdDev`, `wResStdErr`.
5.  **Fallback Mechanism:** All functions are designed for reliability.
If the total weight over the lookback period is zero (e.g., in
a no-volume period), the algorithms automatically fall back to
their unweighted, uniform-weight equivalents (e.g., `wSma`
becomes a standard `ta.sma`), preventing errors and ensuring
continuous calculation.
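To make the fallback idea concrete, here is an illustrative re-implementation of the `wSma` concept (a sketch, not the library's code): weight each sample by the external series and revert to a plain mean when the window's total weight is zero.

```pine
//@version=5
indicator("Weighted SMA with Fallback (sketch)", overlay = true)
wSmaSketch(src, w, len) =>
    sumWS = math.sum(src * w, len) // weighted sum of the source
    sumW  = math.sum(w, len)       // total weight in the window
    // Fall back to the uniform-weight mean when total weight is zero
    sumW > 0 ? sumWS / sumW : ta.sma(src, len)
plot(wSmaSketch(close, volume, 20), "Volume-Weighted SMA", color.blue)
```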
---
**DISCLAIMER**
This library is provided "AS IS" and for informational and
educational purposes only. It does not constitute financial,
investment, or trading advice.
The author assumes no liability for any errors, inaccuracies,
or omissions in the code. Using this library to build
trading indicators or strategies is entirely at your own risk.
As a developer using this library, you are solely responsible
for the rigorous testing, validation, and performance of any
scripts you create based on these functions. The author shall
not be held liable for any financial losses incurred directly
or indirectly from the use of this library or any scripts
derived from it.
 wSma(source, weight, length) 
  Weighted Simple Moving Average (linear kernel).
  Parameters:
     source (float) : series float  Data to average.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 1.
  Returns: series float  Linear-kernel weighted mean; falls back to
the arithmetic mean if Σweight = 0.
 wEma(source, weight, length) 
  Weighted EMA (exponential kernel).
  Parameters:
     source (float) : series float  Data to average.
     weight (float) : series float  Weight series.
     length (simple int) : simple int    Look-back length ≥ 1.
  Returns: series float  Exponential-kernel weighted mean; falls
back to classic EMA if Σweight = 0.
 wWma(source, weight, length) 
  Weighted WMA (linear kernel).
  Parameters:
     source (float) : series float  Data to average.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 1.
  Returns: series float  Linear-kernel weighted mean; falls back to
classic WMA if Σweight = 0.
 wRma(source, weight, length) 
  Weighted RMA (Wilder kernel, α = 1/len).
  Parameters:
     source (float) : series float  Data to average.
     weight (float) : series float  Weight series.
     length (simple int) : simple int    Look-back length ≥ 1.
  Returns: series float  Wilder-kernel weighted mean; falls back to
classic RMA if Σweight = 0.
 wHma(source, weight, length) 
  Weighted HMA (linear kernel).
  Parameters:
     source (float) : series float  Data to average.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 1.
  Returns: series float  Linear-kernel weighted mean; falls back to
classic HMA if Σweight = 0.
 wRsi(source, weight, length) 
  Weighted Relative Strength Index.
  Parameters:
     source (float) : series float  Price series.
     weight (float) : series float  Weight series.
     length (simple int) : simple int    Look-back length ≥ 1.
  Returns: series float  Weighted RSI; uniform if Σw = 0.
 wAtr(tr, weight, length) 
  Weighted ATR (Average True Range).
Implemented as WRMA on *true range*.
  Parameters:
     tr (float) : series float  True Range series.
     weight (float) : series float  Weight series.
     length (simple int) : simple int    Look-back length ≥ 1.
  Returns: series float  Weighted ATR; uniform weights if Σw = 0.
 wTr(tr, weight, length) 
  Weighted True Range over a window.
  Parameters:
     tr (float) : series float  True Range series.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 1.
  Returns: series float  Weighted mean of TR; uniform if Σw = 0.
 wR(r, weight, length) 
  Weighted High-Low Range over a window.
  Parameters:
     r (float) : series float  High-Low per bar.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 1.
  Returns: series float  Weighted mean of range; uniform if Σw = 0.
 wBtwVar(source, weight, length, biased) 
  Weighted Between Variance (biased/unbiased).
  Parameters:
     source (float) : series float  Data series.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns:  
variance  series float  The calculated between-bar variance (σ²btw), either biased or unbiased.
sumW      series float  The sum of weights over the lookback period (Σw).
sumW2     series float  The sum of squared weights over the lookback period (Σw²).
 wBtwStdDev(source, weight, length, biased) 
  Weighted Between Standard Deviation.
  Parameters:
     source (float) : series float  Data series.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  σbtw; uniform if Σw = 0.
 wBtwStdErr(source, weight, length, biased) 
  Weighted Between Standard Error.
  Parameters:
     source (float) : series float  Data series.
     weight (float) : series float  Weight series.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  √(σ²btw / N_eff); uniform if Σw = 0.
 wTotVar(mu, sigma, weight, length, biased) 
  Weighted Total Variance (= between-group + within-group).
Useful when each bar represents an aggregate with its own
*mean* and pre-estimated σ (e.g., second-level ranges inside a
1-minute bar). Assumes the *weight* series applies to both the
group means and their σ estimates.
  Parameters:
     mu (float) : series float  Group means (e.g., HL2 of 1-second bars).
     sigma (float) : series float  Pre-estimated σ of each group (same basis).
     weight (float) : series float  Weight series (volume, ticks, …).
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns:  
varBtw  series float  The between-bar variance component (σ²btw).
varWtn  series float  The within-bar variance component (σ²wtn).
sumW    series float  The sum of weights over the lookback period (Σw).
sumW2   series float  The sum of squared weights over the lookback period (Σw²).
 wTotStdDev(mu, sigma, weight, length, biased) 
  Weighted Total Standard Deviation.
  Parameters:
     mu (float) : series float  Group means (e.g., HL2 of 1-second bars).
     sigma (float) : series float  Pre-estimated σ of each group (same basis).
     weight (float) : series float  Weight series (volume, ticks, …).
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  σtot.
 wTotStdErr(mu, sigma, weight, length, biased) 
  Weighted Total Standard Error.
SE = √( total variance / N_eff ) with the same effective sample
size logic as `wster()`.
  Parameters:
     mu (float) : series float  Group means (e.g., HL2 of 1-second bars).
     sigma (float) : series float  Pre-estimated σ of each group (same basis).
     weight (float) : series float  Weight series (volume, ticks, …).
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  √(σ²tot / N_eff).
 wLinReg(source, weight, length) 
  Weighted Linear Regression.
  Parameters:
     source (float) : series float   Data series.
     weight (float) : series float   Weight series.
     length (int) : series int     Look-back length ≥ 2.
  Returns:  
mid        series float  The estimated value of the regression line at the most recent bar.
slope      series float  The slope of the regression line.
intercept  series float  The intercept of the regression line.
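A usage sketch for the tuple return (the import path and version are placeholders; substitute the library's actual publisher and version):

```pine
//@version=5
indicator("wLinReg usage (sketch)", overlay = true)
// Hypothetical import — replace AuthorName and /1 with the real publisher/version
import AuthorName/LibWght/1 as w
[mid, slope, icpt] = w.wLinReg(close, volume, 50)
plot(mid, "Weighted LR Midline", color.orange)
// Project the line one bar ahead using the returned slope
plot(mid + slope, "Next-Bar Projection", color.new(color.orange, 50), style = plot.style_circles)
```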
 wResVar(source, weight, midLine, slope, length, biased) 
  Weighted Residual Variance around a weighted
linear regression – optionally biased (population) or
unbiased (sample).
  Parameters:
     source (float) : series float   Data series.
     weight (float) : series float   Weighting series (volume, etc.).
     midLine (float) : series float   Regression value at the last bar.
     slope (float) : series float   Slope per bar.
     length (int) : series int     Look-back length ≥ 2.
     biased (bool) : series bool    true  → population variance (σ²_P), denominator ≈ N_eff.
false → sample variance (σ²_S), denominator ≈ N_eff - 2.
(Adjusts for 2 degrees of freedom lost to the regression).
  Returns:  
variance  series float  The calculated residual variance (σ²res), either biased or unbiased.
sumW      series float  The sum of weights over the lookback period (Σw).
sumW2     series float  The sum of squared weights over the lookback period (Σw²).
 wResStdDev(source, weight, midLine, slope, length, biased) 
  Weighted Residual Standard Deviation.
  Parameters:
     source (float) : series float  Data series.
     weight (float) : series float  Weight series.
     midLine (float) : series float  Regression value at the last bar.
     slope (float) : series float  Slope per bar.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  σres; uniform if Σw = 0.
 wResStdErr(source, weight, midLine, slope, length, biased) 
  Weighted Residual Standard Error.
  Parameters:
     source (float) : series float  Data series.
     weight (float) : series float  Weight series.
     midLine (float) : series float  Regression value at the last bar.
     slope (float) : series float  Slope per bar.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population (biased); false → sample.
  Returns: series float  √(σ²res / N_eff); uniform if Σw = 0.
 wLRTotVar(mu, sigma, weight, midLine, slope, length, biased) 
  Weighted Linear-Regression Total Variance **around the
window’s weighted mean μ**.
σ²_tot =  E_w[σ_i²]   ⟶  *within-group variance*
        + Var_w[r_i]  ⟶  *residual variance*
        + Var_w[ŷ_i]  ⟶  *trend variance*
where each bar i in the look-back window contributes
m_i   = *mean*      (e.g. 1-sec HL2)
σ_i   = *sigma*     (pre-estimated intrabar σ)
w_i   = *weight*    (volume, ticks, …)
ŷ_i   = b₀ + b₁·x_i (value of the weighted LR line)
r_i   = m_i − ŷ_i   (residual)
  Parameters:
     mu (float) : series float  Per-bar mean m_i.
     sigma (float) : series float  Pre-estimated σ_i of each bar.
     weight (float) : series float  Weight series w_i (≥ 0).
     midLine (float) : series float  Regression value at the latest bar (ŷₙ₋₁).
     slope (float) : series float  Slope b₁ of the regression line.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population; false → sample.
  Returns:  
varRes  series float  The residual variance component (σ²res).
varWtn  series float  The within-bar variance component (σ²wtn).
varTrd  series float  The trend variance component (σ²trd), explained by the linear regression.
sumW    series float  The sum of weights over the lookback period (Σw).
sumW2   series float  The sum of squared weights over the lookback period (Σw²).
 wLRTotStdDev(mu, sigma, weight, midLine, slope, length, biased) 
  Weighted Linear-Regression Total Standard Deviation.
  Parameters:
     mu (float) : series float  Per-bar mean m_i.
     sigma (float) : series float  Pre-estimated σ_i of each bar.
     weight (float) : series float  Weight series w_i (≥ 0).
     midLine (float) : series float  Regression value at the latest bar (ŷₙ₋₁).
     slope (float) : series float  Slope b₁ of the regression line.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population; false → sample.
  Returns: series float  √(σ²tot).
 wLRTotStdErr(mu, sigma, weight, midLine, slope, length, biased) 
  Weighted Linear-Regression Total Standard Error.
SE = √( σ²_tot / N_eff )  with N_eff = (Σw)² / Σw²  (like in wster()).
  Parameters:
     mu (float) : series float  Per-bar mean m_i.
     sigma (float) : series float  Pre-estimated σ_i of each bar.
     weight (float) : series float  Weight series w_i (≥ 0).
     midLine (float) : series float  Regression value at the latest bar (ŷₙ₋₁).
     slope (float) : series float  Slope b₁ of the regression line.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population; false → sample.
  Returns: series float  √((σ²res + σ²wtn + σ²trd) / N_eff).
 wLRLocTotStdDev(mu, sigma, weight, midLine, slope, length, biased) 
  Weighted Linear-Regression Local Total Standard Deviation.
Measures the total "noise" (within-bar + residual) around the trend.
  Parameters:
     mu (float) : series float  Per-bar mean m_i.
     sigma (float) : series float  Pre-estimated σ_i of each bar.
     weight (float) : series float  Weight series w_i (≥ 0).
     midLine (float) : series float  Regression value at the latest bar (ŷₙ₋₁).
     slope (float) : series float  Slope b₁ of the regression line.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population; false → sample.
  Returns: series float  √(σ²wtn + σ²res).
 wLRLocTotStdErr(mu, sigma, weight, midLine, slope, length, biased) 
  Weighted Linear-Regression Local Total Standard Error.
  Parameters:
     mu (float) : series float  Per-bar mean m_i.
     sigma (float) : series float  Pre-estimated σ_i of each bar.
     weight (float) : series float  Weight series w_i (≥ 0).
     midLine (float) : series float  Regression value at the latest bar (ŷₙ₋₁).
     slope (float) : series float  Slope b₁ of the regression line.
     length (int) : series int    Look-back length ≥ 2.
     biased (bool) : series bool   true → population; false → sample.
  Returns: series float  √((σ²wtn + σ²res) / N_eff).
 wLSma(source, weight, length) 
  Weighted Least Square Moving Average.
  Parameters:
     source (float) : series float   Data series.
     weight (float) : series float   Weight series.
     length (int) : series int     Look-back length ≥ 2.
  Returns: series float   Least square weighted mean. Falls back
to unweighted regression if Σw = 0.
Central Limit Theorem Reversion Indicator
Dear TV community, let me introduce you to the first-ever Central Limit Theorem indicator on TradingView.
The Central Limit Theorem is used in statistics and it can be quite useful in quant trading and understanding market behaviors. 
In short, the CLT states: "When you take repeated samples from any population and calculate their averages, those averages will form a normal (bell curve) distribution—no matter what the original data looks like."
In this CLT indicator, I use statistical theory to identify high-probability mean reversion opportunities in the markets. It calculates statistical confidence bands and z-scores to identify when price movements deviate significantly from their expected distribution, signaling potential reversion opportunities with quantifiable probability levels.
 Mathematical Foundation 
The Central Limit Theorem (CLT) says that when you average many data points together, those averages will form a predictable bell-curve pattern, even if the original data is completely random and unpredictable (which often is in the markets). This works no matter what you're measuring, and it gets more reliable as you use more data points.
Why using it for trading?
Individual price movements seem random and chaotic, but when we look at the average of many price movements, we can actually predict how they should behave statistically. This lets us spot when prices have moved "too far" from what's normal—and those extreme moves tend to snap back (mean reversion).
Key Formula:
Z = (X̄ - μ) / (σ / √n)
Where:
- X̄ = Sample mean (average return over n periods)
- μ = Population mean (long-term expected return)
- σ = Population standard deviation (volatility)
- n = Sample size
- σ/√n = Standard error of the mean
 How I Apply CLT 
Step 1: Calculate Returns
Measures how much price changed from one bar to the next (using logarithms for better statistical properties)
Step 2: Average Recent Returns
Takes the average of the last n returns (e.g., last 100 bars). This is your "sample mean."
Step 3: Find What's "Normal"
Looks at historical data to determine: a) What the typical average return should be (the long-term mean) and b) How volatile the market usually is (standard deviation)
Step 4: Calculate Standard Error
Determines how much sample averages naturally vary. Larger samples = smaller expected variation.
Step 5: Calculate Z-Score
Measures how unusual the current situation is.
Step 6: Draw Confidence Bands
Converts these statistical boundaries into actual price levels on your chart, showing where price is statistically expected to stay 95% and 99% of the time.
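Steps 1 through 5 condense to a few lines of Pine. The sketch below is a simplified reconstruction under assumed window defaults (n = m = 100), not the published source, and it omits Step 6's conversion back to price bands:

```pine
//@version=5
indicator("CLT Z-Score (sketch)")
n = input.int(100, "Sample Size (n)")
m = input.int(100, "Lookback Period (m)")
ret   = math.log(close / close[1])  // Step 1: log returns
xbar  = ta.sma(ret, n)              // Step 2: sample mean of the last n returns
mu    = ta.sma(ret, m)              // Step 3a: long-term expected return
sigma = ta.stdev(ret, m)            // Step 3b: volatility
se    = sigma / math.sqrt(n)        // Step 4: standard error of the mean
z     = (xbar - mu) / se            // Step 5: z-score
plot(z, "Z-Score", color.blue)
hline(1.96,  "95% upper")
hline(-1.96, "95% lower")
```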
 Interpretation & Usage 
The Z-Score:
The z-score tells you how statistically unusual the current price deviation is:
|Z| < 1.0 → Normal behavior, no action
|Z| = 1.0 to 1.96 → Moderate deviation, watch closely
|Z| = 1.96 to 2.58 → Significant deviation (95%+), consider entry
|Z| > 2.58 → Extreme deviation (99%+), high probability setup
The Confidence Bands
- Upper Red Bands: 95% and 99% overbought zones → Expect mean reversion downward; statistically, price should trade beyond these levels only rarely.
- Center Gray Line: Statistical expectation (fair value)
- Lower Blue Bands: 95% and 99% oversold zones → Expect mean reversion upward
Trading Logic:
- When price exceeds the upper 95% band (z-score > +1.96), a deviation that large should occur by chance only about 5% of the time → Strong sell/short signal
- When price falls below the lower 95% band (z-score < -1.96), the deviation is equally extreme on the downside, favoring upward reversion → Strong buy/long signal
Background Gradient
The background color provides real-time visual feedback:
- Blue shades: Oversold conditions, expect upward reversion
- Red shades: Overbought conditions, expect downward reversion
- Intensity: Darker colors indicate stronger statistical significance
 Trading Strategy Examples 
Hypothetically, this is how the indicator could be used: 
- Long: Z-score < -1.96 (below 95% confidence band)
- Short: Z-score > +1.96 (above 95% confidence band)
- Take profit when price returns to center line (Z ≈ 0)
 Input Parameters 
Sample Size (n) - Default: 100
Lookback Period (m) - Default: 100
You can also create alerts based on the indicator. 
Final notes: 
- The indicator uses logarithmic returns for better statistical properties
- Converts statistical bands back to price space for practical use
- Adaptive volatility: Bands automatically widen in high volatility, narrow in low volatility
- No repainting: yay! All calculations use historical data only
Feedback is more than welcome! 
Henri
sensex  9-18-50 + VWAP (VWAP-close confirmation)
Description:
This script plots EMA 9, 18, and 50 along with VWAP to identify directional bias in Sensex. A buy or sell signal is generated only when all three EMAs align in sequence and a confirmed 7-minute candle closes above or below the VWAP, helping filter trades with institutional bias confirmation.
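The alignment-plus-VWAP rule could look roughly like this (an illustrative sketch, not the published code; intended for a 7-minute chart):

```pine
//@version=5
indicator("9-18-50 EMA + VWAP Bias (sketch)", overlay = true)
e9  = ta.ema(close, 9)
e18 = ta.ema(close, 18)
e50 = ta.ema(close, 50)
vw  = ta.vwap(hlc3)
// Require EMA alignment plus a confirmed close relative to VWAP
longOK  = e9 > e18 and e18 > e50 and close > vw and barstate.isconfirmed
shortOK = e9 < e18 and e18 < e50 and close < vw and barstate.isconfirmed
plotshape(longOK,  "Buy",  shape.triangleup,   location.belowbar, color.green)
plotshape(shortOK, "Sell", shape.triangledown, location.abovebar, color.red)
```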
Nqaba Probable High/Low — Overshoot/Undershoot {Larry Method}
This Probable High/Low indicator is an advanced tool inspired by Larry R. Williams’ original projection formulas.
It calculates probable daily highs and lows based on the prior day’s open, high, low, and close, allowing traders to anticipate key intraday price levels with precision.
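The description does not disclose the exact formulas. For orientation only, one commonly cited Williams-style projection (an assumption here, not necessarily this script's math) conditions on the prior day's close-open relationship:

```pine
//@version=5
indicator("Probable High/Low (sketch)", overlay = true)
// Prior-day OHLC (indexing [1] inside the request avoids repainting)
[o, h, l, c] = request.security(syminfo.tickerid, "D",
     [open[1], high[1], low[1], close[1]], lookahead = barmerge.lookahead_on)
// One commonly cited variant (assumption):
x = c < o ? h + 2 * l + c : c > o ? 2 * h + l + c : h + l + 2 * c
probHigh = x / 2 - l
probLow  = x / 2 - h
plot(probHigh, "Probable High", color.red)
plot(probLow,  "Probable Low",  color.green)
```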
Stablecoin Liquidity Delta (Aggregate Market Cap Flow)
Hi All,
This indicator visualizes the bar-to-bar change in the aggregate market capitalization of major stablecoins, including USDT, USDC, DAI, and others. It serves as a proxy for monitoring on-chain liquidity and measuring capital inflows or outflows across the crypto market.
Stablecoins are the primary liquidity layer of the crypto economy. Their combined market capitalization acts as a mirror of the available fiat-denominated liquidity in digital markets:
🟩 An increase in the total stablecoin market capitalization indicates new issuance (capital entering the market).
🟥 A decrease reflects redemption or burning (liquidity exiting the system).
Tracking these flows helps anticipate macro-level liquidity trends that often lead overall market direction, providing context for broader price movements.
All values are derived from TradingView’s public CRYPTOCAP tickers, which represent the market capitalization of each stablecoin. While minor deviations can occur due to small price fluctuations around the $1 peg, these figures serve as a proxy for circulating supply and net issuance across the stablecoin ecosystem.
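A minimal version of the aggregation could look like this (the ticker list is a subset for illustration; the full script covers more stablecoins):

```pine
//@version=5
indicator("Stablecoin Market Cap Delta (sketch)")
// Aggregate a few major stablecoin market caps via CRYPTOCAP tickers
usdt = request.security("CRYPTOCAP:USDT", timeframe.period, close)
usdc = request.security("CRYPTOCAP:USDC", timeframe.period, close)
dai  = request.security("CRYPTOCAP:DAI",  timeframe.period, close)
total = usdt + usdc + dai
delta = total - total[1] // bar-to-bar change: issuance (+) vs. redemption (−)
plot(delta, "Liquidity Delta", delta >= 0 ? color.green : color.red, style = plot.style_columns)
```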
Nqaba Probable High/Low {Larry Method}
The Probable High/Low indicator is an advanced tool inspired by Larry R. Williams’ original projection formulas.
It calculates probable daily highs and lows based on the prior day’s open, high, low, and close, allowing traders to anticipate key intraday price levels with precision.
This version provides full control over visibility, styling, and historical analysis — making it both educational and powerful for active traders.
Continuation Probability (0–100)
This indicator helps measure how likely the current candle trend is to continue or reverse, giving a probability score between 0 and 100.
It combines multiple market factors (trend, candle strength, volume, and volatility) to create a single, intuitive signal.
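The description does not specify the scoring math; one generic way to blend such factors into a 0–100 score (purely an illustrative assumption) is to normalize each component to roughly 0..1 and average:

```pine
//@version=5
indicator("Continuation Score (sketch)")
len = input.int(20, "Normalization Window")
// Each sub-score is scaled to roughly 0..1 (all weightings are assumptions)
trendScore = ta.ema(close, 9) > ta.ema(close, 21) ? 1.0 : 0.0
bodyScore  = math.abs(close - open) / math.max(high - low, syminfo.mintick)
volScore   = math.min(volume / math.max(ta.sma(volume, len), 1), 2) / 2
atrScore   = math.min(ta.atr(14) / math.max(ta.sma(ta.atr(14), len), syminfo.mintick), 2) / 2
score = 100 * (trendScore + bodyScore + volScore + atrScore) / 4
plot(score, "Continuation Probability", color.blue)
hline(50)
```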