Input Logic & Routing Behavior

This page documents how dynamic input variables such as user_region, submission_type, and user_role shape policy selection, compliance logic, and escalation behavior within the Global Control Copilot template.

These inputs are extracted via orchestration blocks (e.g., Parameter Extractor) and are used to route submissions through jurisdiction-specific logic paths.


1. Input Sources (Where Values Come From)

| Variable | Source Block | Description |
| --- | --- | --- |
| sys.files | User Upload | Contains raw disclosure content (risk, governance, jurisdiction). |
| user_region | Parameter Extractor | Set by upstream chat metadata, a dropdown, or auto-detection. |
| submission_type | Parameter Extractor | Declared via a user form or inferred from the workflow setup. |
| user_role | Parameter Extractor | May be prefilled from the session or entered during form submission. |
| risk_exposure, checklist_status | Doc Extractor or LLM | Parsed from sys.files, then normalized via enrichment. |


2. user_region: Determines Policy Source

| Value | Effect |
| --- | --- |
| EU | Loads MiCA policy. Enforces thresholds such as €1M exposure and a 5-day deadline. |
| US | Loads SOX-based policy logic. Checks for signoffs and SEC-relevant fields. |
| APAC | Loads fallback or local templates. Often less rigid in checklist validation. |
| Global | Uses the internal generic fallback policy. Used for simulation or pre-alignment testing. |
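
The region-to-policy routing above can be sketched as a simple lookup. This is a hypothetical illustration: the policy labels, threshold values, and config keys are assumptions drawn from the examples on this page, not the template's actual configuration.

```python
# Hypothetical region-to-policy routing table; all values are illustrative only.
REGION_POLICIES = {
    "EU":     {"policy": "MiCA", "exposure_limit_eur": 1_000_000, "deadline_days": 5},
    "US":     {"policy": "SOX", "requires_signoff": True},
    "APAC":   {"policy": "local_fallback", "strict_checklist": False},
    "Global": {"policy": "generic_fallback", "simulation_only": True},
}

def resolve_policy(user_region: str) -> dict:
    """Look up the policy config for a region, falling back to the Global policy."""
    return REGION_POLICIES.get(user_region, REGION_POLICIES["Global"])
```

Unknown regions fall through to the generic fallback, mirroring the Global row above.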


3. submission_type: Determines Logic Branches

| Value | Validations Triggered |
| --- | --- |
| Disclosure | Requires risk, liquidity, and governance sections. |
| Attestation | Requires signatures, audit metadata, and a compliance officer role. |
| Exception Report | Auto-escalates unless a valid policy override is present. |
| Incident | Evaluates breach detail, event log, and downstream impact. |
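
One way to express these branches is a mapping from submission type to required sections, checked against what the Doc Extractor actually found. The section identifiers below are assumptions for illustration, not the template's canonical field names.

```python
# Illustrative required-section mapping per submission type (names are assumed).
REQUIRED_SECTIONS = {
    "Disclosure":       {"risk", "liquidity", "governance"},
    "Attestation":      {"signatures", "audit_metadata", "compliance_officer"},
    "Exception Report": {"override_reason"},
    "Incident":         {"breach_detail", "event_log", "downstream_impact"},
}

def missing_sections(submission_type: str, present: set) -> set:
    """Return the required sections that are absent from the parsed submission."""
    return REQUIRED_SECTIONS.get(submission_type, set()) - present
```

A non-empty result would route the workflow toward user guidance or escalation.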


4. user_role: Affects Prompt Tone & Escalation Rules

| Value | Behavior Adjustments |
| --- | --- |
| Finance Manager | Triggers capital/risk validation logic. |
| Legal Counsel | Expects an attestation chain and stricter audit validation. |
| Product Ops | Focuses on procedural policy routing; may tolerate soft violations. |
| External Submitter | Triggers early flagging and stricter trust boundaries. |


5. Combined Example

If the submission is from:

  • user_region = EU

  • submission_type = Disclosure

  • user_role = Finance Manager

Then:

  • MiCA logic is loaded (via Knowledge Retrieval)

  • Risk exposure and liquidity sections are required

  • Checklist omission triggers escalation

  • Summary tone addresses internal financial policy compliance


6. Required Blocks to Support This

To enable this dynamic behavior, the template must include:

  • Parameter Extractor → to set user_region, submission_type, user_role

  • Knowledge Retrieval → to fetch the correct policy content

  • Variable Assigner → to inject threshold values (e.g., exposure limits)

  • Doc Extractor or LLM → to parse the uploaded file and pull out key sections

  • IF/ELSE → to branch based on compliance

  • Agent and LLM → to escalate or summarize

  • Save Point → to log and export session metadata


7. Code Block Example: Variable Payload

The following is a sample of how these input variables might appear inside the orchestration memory or test payload. This structure is useful for debugging, test configuration, or SDK integration when simulating different jurisdictional conditions.

# Example Input Variable Payload
user_region: EU
submission_type: Disclosure
user_role: Finance Manager

# Derived values from parsed file (via LLM or Doc Extractor)
risk_exposure: 4200000
checklist_status: Complete
submission_date: 2025-05-24
policy_reference: MiCA
issuer_name: TokenBridge Capital Ltd.

This structure allows downstream logic blocks to reason based on both user-declared context (user_region, submission_type, user_role) and parsed content from uploaded files (risk_exposure, issuer_name, etc.).

For testing or debugging, these fields can be passed directly into a Variable Assigner block or used as fixed test cases during QA.
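
As a sketch, that payload could be encoded as a fixed QA fixture like this; the value types (float exposure, date object) are assumptions about how the template stores them, and the actual Variable Assigner schema may differ.

```python
# Fixed test-case fixture mirroring the sample payload above.
from datetime import date

TEST_PAYLOAD = {
    # User-declared context
    "user_region": "EU",
    "submission_type": "Disclosure",
    "user_role": "Finance Manager",
    # Derived values from the parsed file (via LLM or Doc Extractor)
    "risk_exposure": 4_200_000.0,
    "checklist_status": "Complete",
    "submission_date": date(2025, 5, 24),
    "policy_reference": "MiCA",
    "issuer_name": "TokenBridge Capital Ltd.",
}
```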


8. Example LLM Prompt Template (Dynamic Inputs)

This is a sample prompt used inside the LLM reasoning block to evaluate whether the uploaded compliance submission meets applicable policy requirements. It demonstrates how the system references both declared context variables and parsed data.

You are a compliance reasoning agent reviewing a policy disclosure report.

Jurisdiction: {{user_region}}  
Submission Type: {{submission_type}}  
User Role: {{user_role}}  
Policy Framework: {{policy_reference}}  

Evaluate the submission based on the following extracted details:

- Issuer: {{issuer_name}}  
- Reported Risk Exposure: €{{risk_exposure}}  
- Capital Buffer Checklist: {{checklist_status}}  
- Submission Date: {{submission_date}}

Determine whether the submission:
1. Meets the policy thresholds based on the loaded policy ({{policy_reference}})
2. Contains all required sections for the submission type ({{submission_type}})
3. Requires escalation based on missing, incomplete, or delayed information

If the submission is compliant, generate a brief summary of why.  
If not, identify the violations and recommend next steps.

This pattern ensures consistent reasoning across jurisdictions and allows developers to structure prompt logic around contextual input variables without hardcoding static policy rules.
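
A minimal sketch of the {{variable}} substitution step follows. Most orchestration engines interpolate these placeholders natively, so this only illustrates the mechanics for debugging outside the platform.

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with the matching value from `variables`."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

prompt = render(
    "Jurisdiction: {{user_region}}\nPolicy Framework: {{policy_reference}}",
    {"user_region": "EU", "policy_reference": "MiCA"},
)
```

After rendering, `prompt` holds the fully substituted text ready for the LLM block.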


9. Escalation Logic Conditions (Examples)

The Global Control Copilot uses conditional logic blocks (like IF/ELSE) to evaluate whether a submission meets policy thresholds or requires escalation to a human reviewer. These conditions are derived from both user-declared inputs and parsed values.

Here are example escalation triggers based on variable conditions:


🔍 Conditional Logic Triggers

| Condition | Explanation | Escalation Action |
| --- | --- | --- |
| risk_exposure > 1000000 and user_region == "EU" | MiCA limit exceeded | Escalate to compliance analyst |
| checklist_status != "Complete" and submission_type == "Disclosure" | Required section missing | Trigger user guidance + escalate |
| submission_date > deadline_by_policy | Late submission | Flag and route to manual review |
| user_role == "External Submitter" and attestation_missing == true | Trust boundary exceeded | Auto-escalation with warning |
| submission_type == "Exception Report" and override_reason == null | No justification provided | Block auto-approval and escalate |


🧠 Memory Variables Involved

  • risk_exposure (float)

  • checklist_status (enum: Complete, Incomplete, Missing)

  • submission_date (date)

  • user_region, user_role, submission_type (string)

  • attestation_missing (boolean)

  • override_reason (string/null)
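
These triggers can be sketched as predicates over the memory variables. Treat the thresholds and field names as assumptions taken from the examples on this page, not the template's authoritative rules.

```python
from datetime import date

def escalation_reasons(memory: dict) -> list:
    """Evaluate the example escalation triggers against the session memory."""
    reasons = []
    if memory.get("risk_exposure", 0) > 1_000_000 and memory.get("user_region") == "EU":
        reasons.append("MiCA limit exceeded")
    if memory.get("checklist_status") != "Complete" and memory.get("submission_type") == "Disclosure":
        reasons.append("Required section missing")
    if memory.get("submission_date") and memory["submission_date"] > memory.get("deadline_by_policy", date.max):
        reasons.append("Late submission")
    if memory.get("user_role") == "External Submitter" and memory.get("attestation_missing"):
        reasons.append("Trust boundary exceeded")
    if memory.get("submission_type") == "Exception Report" and memory.get("override_reason") is None:
        reasons.append("No justification provided")
    return reasons
```

An empty list means no escalation trigger fired; any entries would route the session to the escalation track.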


✨ Escalation Outputs

When triggered, these conditions typically route the workflow to:

  • An Agent block (e.g., assigned to a compliance reviewer)

  • An LLM block to generate a detailed explanation for the user

  • A Save Point block for logging the flagged session


10. Auto-Approval Logic Conditions (Examples)

While escalation rules define when to stop and escalate, auto-approval rules help define a “clean pass” scenario — where no manual review is needed.

| Condition | Explanation | Auto-Approval Action |
| --- | --- | --- |
| risk_exposure <= 1000000 and user_region == "EU" | MiCA threshold respected | Allow approval summary generation |
| checklist_status == "Complete" and submission_type == "Disclosure" | Disclosure fully completed | Proceed to HTTP API export |
| attestation_verified == true and submission_type == "Attestation" | External review confirmed | Mark submission valid without escalation |
| override_reason != null and submission_type == "Exception Report" | Justified exception present | Log with reasoning and approve |
| submission_date <= policy_deadline | On-time report | Skip delay flag, proceed to approval |

These conditions trigger an LLM → HTTP Request → Save Point path.
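
The clean-pass check can be condensed into a single predicate. Again a sketch: the €1M figure and the field names come from the EU examples above and should be treated as assumptions.

```python
from datetime import date

def is_clean_pass(memory: dict) -> bool:
    """True when no escalation trigger fires and the submission can auto-approve."""
    on_time      = memory["submission_date"] <= memory["policy_deadline"]
    within_limit = not (memory["user_region"] == "EU" and memory["risk_exposure"] > 1_000_000)
    complete     = memory["checklist_status"] == "Complete"
    return on_time and within_limit and complete
```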


11. Block-Level Wiring Map (Execution Flow Summary)

Below is a high-level description of how these logic conditions are implemented using GLIK Studio blocks:

🔁 Decision Flow in Orchestration

  1. File Upload (sys.files)

    • Ingested at start node or through chat form

  2. Parameter Extractor Block

    • Sets: user_region, submission_type, user_role

  3. Doc Extractor / LLM Parser Block

    • Parses risk_exposure, checklist_status, attestation_verified, submission_date

  4. IF/ELSE Block — “Is Submission Compliant?”

    • Evaluates combined conditions:

      if:
        risk_exposure <= threshold_by_region
        and checklist_status == "Complete"
        and submission_date <= policy_deadline

  5. IF path → Auto-Approval Track

    • LLM Block: generate approval summary

    • HTTP Request: post to compliance API

    • Save Point: log session summary

  6. ELSE path → Escalation Track

    • Agent Block: escalate for review

    • LLM Block: explain rejection or missing elements

    • Save Point: export violation metadata
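
The whole decision flow above reduces to a single branch. Here is a plain-Python sketch, with comments marking the conditions the IF/ELSE block evaluates; the field names are assumptions mirroring the pseudocode in step 4.

```python
def route_submission(memory: dict) -> str:
    """Mirror of the IF/ELSE block: returns 'auto_approval' or 'escalation'."""
    compliant = (
        memory["risk_exposure"] <= memory["threshold_by_region"]    # condition 1
        and memory["checklist_status"] == "Complete"                # condition 2
        and memory["submission_date"] <= memory["policy_deadline"]  # condition 3
    )
    # IF path → LLM summary, HTTP export, Save Point;
    # ELSE path → Agent escalation, LLM explanation, Save Point.
    return "auto_approval" if compliant else "escalation"
```

ISO-formatted date strings compare correctly here, so dates can be passed either as strings or as date objects.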
