Financial Time Series
This reasoning model identifies directional trends by predicting how likely it is that price action in the selected timeframe will propagate to lower-frequency timeframes, distinguishing structurally significant market moves from noise.
We recommend using data of a higher frequency than your target forecast horizon, as this leverages the model's propagation methodology most effectively (e.g., use 15- or 30-minute data to predict daily moves).
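For example, to target daily moves from 15-minute input data, the metadata fields of the request body (described under Data Structure below) would be set along these lines (illustrative values; reasoning_mode mirrors the later examples):
"interval": 15,
"interval_unit": "minutes",
"reasoning_mode": "reactive"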
Key Concepts
The reasoning model operates as a unified system where predictions across multiple time horizons work together rather than independently. This integrated approach reveals how market movements propagate across timeframes, uncovering interconnected relationships that traditional single-timeframe approaches fundamentally cannot detect.
The reasoning model assumes that all price movements originate in high-frequency time series before cascading into lower-frequency ones. It therefore analyses price action and returns only the trends it believes will propagate into lower-frequency timeframes.
The reasoning model begins its analysis with as few as 10 historical data points. Your results include both the forecast (if one is identified) and the trend context that shaped it, all in one view. This helps you understand whether predictions occur at the beginning, middle, or end of broader trends, making forecasts easier to interpret and increasing your confidence in decisions.
Each forecast should be interpreted within its trend context: the first signal in a given direction marks the start of a trend, while consecutive forecasts in the same direction indicate the persistence of the current trend.
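As a minimal sketch of this interpretation (assuming the returned_analysis.csv file produced at the end of Code Examples below), sign changes between consecutive signals separate trend starts from continuations:
import pandas as pd

# Load the analysis saved at the end of the Code Examples section
analysis = pd.read_csv('returned_analysis.csv', parse_dates=['datetime'])

# A signal starts a new trend when its direction differs from the previous signal;
# repeats of the same direction indicate the current trend is persisting
new_trend = analysis['trend_identified'].ne(analysis['trend_identified'].shift())
analysis['trend_context'] = new_trend.map({True: 'trend start', False: 'trend continuation'})
print(analysis.tail())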
The reasoning model delivers immediate market insights without requiring market-specific training data. While predictive out of the box, it can be fine-tuned to optimise performance for specific time horizons through customised post-training. This flexibility allows users to incorporate multiple timeframes or any combination of traditional and alternative datasets of their choosing.
API requests are processed in a FIFO (first-in, first-out) queue on a shared server, with each request taking approximately 1-2 seconds to complete. Wait times may extend during high-usage periods. For usage without queue delays, contact us at [email protected] to arrange a dedicated environment.
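Because each request takes roughly 1-2 seconds and may queue behind others, it is sensible to set a generous client-side timeout. A minimal sketch, assuming the endpoint and headers used in the Code Examples section below; post_with_timeout is a hypothetical helper, not part of the API:
import os
import requests

def post_with_timeout(payload, timeout=60):
    """Send a request to the Financial Time Series endpoint, allowing for shared-queue delays."""
    url = "https://www.sumtyme.tech/agn-reasoning/ts"
    headers = {"Content-Type": "application/json", "X-API-Key": os.environ.get('apikey')}
    try:
        response = requests.post(url, json=payload, headers=headers, timeout=timeout)
        response.raise_for_status()  # surface HTTP errors explicitly
        return response.json()
    except requests.exceptions.Timeout:
        print("Request timed out; the shared queue may be busy. Retry later.")
        return None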
Data Structure
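The request body is a single JSON object of aligned OHLC arrays plus metadata. To request a forecast for the most recent period, set that period's open, high, low, and close values to 0.0, as in the final entries below (this convention is demonstrated in Quick Start and Code Examples):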
{
  "datetime": [
    "2005-08-30 00:00:00",
    "2005-08-31 00:00:00",
    "2005-09-01 00:00:00",
    "2005-09-02 00:00:00",
    "2005-09-05 00:00:00",
    ...,
    "2025-06-12 00:00:00",
    "2025-06-13 00:00:00",
    "2025-06-16 00:00:00",
    "2025-06-17 00:00:00",
    "2025-06-18 00:00:00"
  ],
  "open": [
    5228.10009765625,
    5255.7998046875,
    5296.89990234375,
    5328.5,
    5326.7998046875,
    ...,
    8864.400390625,
    8884.900390625,
    8850.599609375,
    8875.2001953125,
    0.0
  ],
  "high": [
    5270.2001953125,
    5300.0,
    5342.10009765625,
    5338.10009765625,
    5341.7998046875,
    ...,
    8896.900390625,
    8898.599609375,
    8902.400390625,
    8875.2001953125,
    0.0
  ],
  "low": [
    5228.10009765625,
    5255.7998046875,
    5296.89990234375,
    5319.60009765625,
    5320.60009765625,
    ...,
    8843.599609375,
    8822.2001953125,
    8850.599609375,
    8809.900390625,
    0.0
  ],
  "close": [
    5255.7998046875,
    5296.89990234375,
    5328.5,
    5326.89990234375,
    5337.7998046875,
    ...,
    8884.900390625,
    8850.599609375,
    8875.2001953125,
    8834.0,
    0.0
  ],
  "interval": 1,
  "interval_unit": "days",
  "reasoning_mode": "reactive"
}
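The response maps each timestamp at which a trend was identified to a trend_identified value. In the examples throughout this guide, 1.0 accompanies upward trends and -1.0 downward trends, so the sign appears to encode direction: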
{
  "2007-10-11T00:00:00": {
    "trend_identified": 1.0
  },
  "2007-11-08T00:00:00": {
    "trend_identified": 1.0
  },
  "2007-11-14T00:00:00": {
    "trend_identified": 1.0
  },
  "2008-07-29T00:00:00": {
    "trend_identified": -1.0
  },
  "2008-08-07T00:00:00": {
    "trend_identified": -1.0
  },
  ...,
  "2025-06-12T00:00:00": {
    "trend_identified": -1.0
  },
  "2025-06-13T00:00:00": {
    "trend_identified": -1.0
  },
  "2025-06-16T00:00:00": {
    "trend_identified": -1.0
  },
  "2025-06-17T00:00:00": {
    "trend_identified": -1.0
  },
  "2025-06-18T00:00:00": {
    "trend_identified": -1.0
  }
}
Quick Start: Download Price Data
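Two helper scripts follow: the first downloads daily equity data with yfinance, the second downloads intraday candles from the Binance public API.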
import yfinance as yf
import pandas as pd

def daily(ticker_name, period):
    """
    Fetches historical stock data for a given ticker and period,
    then processes and renames the columns for consistency.

    Args:
        ticker_name (str): The stock ticker symbol (e.g., "AAPL").
        period (str): The period for which to fetch data (e.g., "1d", "5d", "1mo", "1y", "max").

    Returns:
        pd.DataFrame: A DataFrame containing the processed historical price data
            with 'datetime', 'open', 'high', 'low', 'close' columns.
    """
    # Create a Ticker object for the given ticker symbol
    ticker = yf.Ticker(ticker_name)
    # Fetch historical data using the specified period and a daily interval
    data = ticker.history(period=period, interval="1d")
    # Reset the index so 'Date' becomes a regular column instead of the index
    data = data.reset_index(drop=False)
    # Select only the relevant columns: 'Date', 'Open', 'High', 'Low', 'Close'
    data = data[['Date', 'Open', 'High', 'Low', 'Close']]
    # Remove timezone information from the 'Date' column to simplify datetime handling
    data['Date'] = data['Date'].dt.tz_localize(None)
    # Rename columns to a consistent, lowercase format
    data.rename(columns={'Date': 'datetime', 'Open': 'open', 'High': 'high', 'Low': 'low', 'Close': 'close'}, inplace=True)
    # Convert the 'datetime' column to the standardised string format "%Y-%m-%d %H:%M:%S"
    data['datetime'] = pd.to_datetime(data['datetime']).dt.strftime("%Y-%m-%d %H:%M:%S")
    # Return the processed DataFrame
    return data

ticker_name = "AAPL"
period = "max"
price_data = daily(ticker_name, period)

# For this example, predict whether a trend has been identified for the last period
price_data.loc[price_data.index[-1], ['open', 'high', 'low', 'close']] = 0
print(price_data)

# Save AAPL data to a CSV file
price_data.to_csv('AAPL_Daily.csv', index=False)
import requests
import pandas as pd
import time
from datetime import datetime

def download_binance_historical_klines(symbol, interval, start_time, end_time, output_file=None):
    """
    Download historical klines (candlestick data) from Binance between two dates.

    Parameters:
        symbol (str): Trading pair symbol (e.g., 'BTCUSDT')
        interval (str): Kline interval (e.g., '15m', '1h', '1d')
        start_time (str): Start date in format 'YYYY-MM-DD HH:MM:SS'
        end_time (str): End date in format 'YYYY-MM-DD HH:MM:SS'
        output_file (str): Optional output file name (CSV)

    Returns:
        pandas.DataFrame: DataFrame with the historical klines data
    """
    # Convert time strings to timestamps in milliseconds
    start_ts = int(datetime.strptime(start_time, '%Y-%m-%d %H:%M:%S').timestamp() * 1000)
    end_ts = int(datetime.strptime(end_time, '%Y-%m-%d %H:%M:%S').timestamp() * 1000)

    # Base endpoint
    base_url = "https://api.binance.com/api/v3/klines"

    # Initialise an empty list to store all klines
    all_klines = []

    # Binance returns at most 1000 candles per request, so multiple requests
    # are needed if the requested time period contains more than 1000 candles
    current_start = start_ts

    print(f"Downloading {symbol} {interval} data from {start_time} to {end_time}")

    while current_start < end_ts:
        # Parameters for the API request
        params = {
            'symbol': symbol,
            'interval': interval,
            'startTime': current_start,
            'endTime': end_ts,
            'limit': 1000  # Maximum limit allowed by Binance
        }

        # Make the request
        response = requests.get(base_url, params=params)

        # If the request was successful
        if response.status_code == 200:
            # Parse the response JSON
            klines = response.json()
            if not klines:
                break

            # Add klines to the list
            all_klines.extend(klines)

            # Advance the start time for the next request to the
            # timestamp of the last kline + 1 millisecond
            current_start = klines[-1][0] + 1

            # Print progress
            last_date = datetime.fromtimestamp(klines[-1][0] / 1000).strftime('%Y-%m-%d %H:%M:%S')
            print(f"Downloaded up to {last_date}, {len(all_klines)} records so far")

            # Respect rate limits (1200 requests per minute)
            time.sleep(0.05)
        else:
            print(f"Error: {response.status_code}, {response.text}")
            break

    # Create a DataFrame from the klines data
    columns = [
        'datetime', 'open', 'high', 'low', 'close', 'volume',
        'close_time', 'quote_asset_volume', 'number_of_trades',
        'taker_buy_base_asset_volume', 'taker_buy_quote_asset_volume', 'ignore'
    ]
    df = pd.DataFrame(all_klines, columns=columns)

    # Convert timestamp columns to datetime
    df['datetime'] = pd.to_datetime(df['datetime'], unit='ms')
    df['close_time'] = pd.to_datetime(df['close_time'], unit='ms')

    # Convert price and volume columns to float
    numeric_columns = ['open', 'high', 'low', 'close', 'volume',
                       'quote_asset_volume', 'taker_buy_base_asset_volume',
                       'taker_buy_quote_asset_volume']
    df[numeric_columns] = df[numeric_columns].astype(float)

    # Convert number_of_trades to int
    df['number_of_trades'] = df['number_of_trades'].astype(int)

    # Sort by timestamp
    df.sort_values('datetime', inplace=True)

    # Save to CSV if output_file is specified
    if output_file:
        df.to_csv(output_file, index=False)
        print(f"Data saved to {output_file}")

    return df

# Define parameters
symbol = 'BTCUSDT'
interval = '1s'  # 1-second candles

# Set your desired start and end dates here
start_time = '2025-06-01 00:00:00'
end_time = '2025-06-07 00:00:00'

# Download the data
price_data = download_binance_historical_klines(symbol, interval, start_time, end_time)
price_data.to_csv('BTCUSDT_1s.csv', index=False)
Code Examples
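The end-to-end example below loads the AAPL daily data saved in Quick Start, zeroes the final period, builds the request payload, calls the Financial Time Series endpoint, and converts the response into a DataFrame.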
# Import required libraries
import requests
import pandas as pd
import os
from dotenv import load_dotenv

load_dotenv()

def time_series_dict(df, interval, interval_unit, reasoning_mode):
    """
    Converts a DataFrame with OHLC price data into the request payload format.

    Parameters:
        df (DataFrame): DataFrame containing datetime and OHLC price columns
        interval (int): The time interval between data points
        interval_unit (str): The unit of time for intervals (seconds, minutes, days)
        reasoning_mode (str): Reasoning strategy (proactive or reactive)

    Returns:
        dict: Dictionary with lists of datetime and price data along with metadata
    """
    return {
        "datetime": df['datetime'].tolist(),  # Convert datetime column to list
        "open": df['open'].tolist(),          # Convert open prices to list
        "high": df['high'].tolist(),          # Convert high prices to list
        "low": df['low'].tolist(),            # Convert low prices to list
        "close": df['close'].tolist(),        # Convert close prices to list
        "interval": interval,                 # Time frequency value
        "interval_unit": interval_unit,       # Time unit (seconds, minutes, days)
        "reasoning_mode": reasoning_mode      # Reasoning approach (proactive/reactive)
    }

def dict_to_dataframe(response_dict):
    """
    Convert a nested dictionary to a pandas DataFrame with datetime and trend data.

    Takes a dictionary where keys are timestamp strings and values are dictionaries
    containing 'trend_identified' data. Returns a sorted DataFrame with datetime
    and trend_identified columns.

    Args:
        response_dict (dict): Dictionary with timestamp keys and nested dictionaries
            containing 'trend_identified' values

    Returns:
        pd.DataFrame: DataFrame with 'datetime' and 'trend_identified' columns,
            sorted by datetime in ascending order
    """
    # Create lists for datetime and trend_identified values
    datetimes = []
    trends = []

    # Extract values from the nested dictionary
    for timestamp, data in response_dict.items():
        datetimes.append(timestamp)
        trends.append(data['trend_identified'])

    # Create the DataFrame
    df = pd.DataFrame({
        'datetime': datetimes,
        'trend_identified': trends
    })

    # Convert datetime strings to pandas datetime objects
    df['datetime'] = pd.to_datetime(df['datetime'])

    # Sort by datetime
    df = df.sort_values('datetime')

    return df

price_data = pd.read_csv('AAPL_Daily.csv')

# For this example, predict whether a trend has been identified for the last period
price_data.loc[price_data.index[-1], ['open', 'high', 'low', 'close']] = 0

# Select the last 5001 rows
last_5001_rows = price_data.tail(5001)

print(f"Forecasting whether a trend has been identified for {last_5001_rows['datetime'].iloc[-1]}...")

# Define analysis parameters
interval = 1
interval_unit = 'days'
reasoning_mode = 'reactive'

# Construct the request payload
payload = time_series_dict(last_5001_rows, interval, interval_unit, reasoning_mode)

# Financial Time Series API endpoint
url = "https://www.sumtyme.tech/agn-reasoning/ts"

# Send the API request
response = requests.post(
    url,
    json=payload,
    headers={"Content-Type": "application/json", "X-API-Key": os.environ.get('apikey')}
)

# Parse the API response
response_dict = response.json()

# Convert the response to a DataFrame
analysis = dict_to_dataframe(response_dict)
print(analysis)

# Save the analysis to a .csv file
analysis.to_csv('returned_analysis.csv', index=False)
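To view each signal in its trend context alongside prices, one option (a sketch using the DataFrames already in memory from this example) is to merge the analysis back onto the price data:
# Align datetime types before merging: price_data stores strings, analysis stores datetimes
price_data['datetime'] = pd.to_datetime(price_data['datetime'])
merged = price_data.merge(analysis, on='datetime', how='left')
print(merged.tail(10))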