LLM Block

The LLM Block in GLIK Studio is a node that routes data into a large language model (LLM) — such as OpenAI's GPT or Anthropic's Claude — and returns a generated response. It enables natural language understanding, reasoning, and synthesis to be embedded inside a structured enterprise workflow.

It acts as a dynamic reasoning node for:

  • Decision explanation

  • Ambiguity resolution

  • Freeform synthesis (e.g., summaries, justifications, answers)


🧱 Block vs. Node in GLIK

  • A block is a configurable building element in GLIK Studio's visual interface.

  • A node is an instance of that block placed into a specific workflow (graph).

Think of a block as a type or template — and a node as its usage in a particular context.
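The type-versus-template analogy maps naturally onto code. As an illustration only (this is an analogy, not GLIK's internals), a block is like a class definition, and a node is like an object created from that class and placed in a particular workflow:

```python
# Analogy only: a "block" is like a class (a reusable template),
# while a "node" is like an instance placed in a specific workflow.

class LLMBlock:
    """Template: defines what any LLM Block can be configured with."""
    def __init__(self, prompt: str, model: str):
        self.prompt = prompt
        self.model = model

# Two nodes: the same block type used in two different workflow contexts.
summarizer_node = LLMBlock(prompt="Summarize the decision.", model="gpt-4")
escalation_node = LLMBlock(prompt="Explain why this was escalated.", model="claude")
```

The same block type can appear many times in one graph, each node carrying its own configuration.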


🌐 GLIK Studio vs. GLIK Cloud

  • GLIK Studio is the visual builder for composing workflows using blocks (like LLM Block).

  • GLIK Cloud is the managed runtime environment where workflows are executed — it runs and scales your AI-powered applications.

Workflows designed in Studio are published to Cloud.


💼 Enterprise Use Case Context

The LLM Block allows enterprise builders to embed generative reasoning into business logic. It is especially useful in the following contexts:

For Enterprise Developers

  • Fallback logic: Handle uncertain or missing cases in rules-based flows.

  • Complex prompt chains: Inject variables and memory into prompts to drive dynamic reasoning.

  • Human-like summarization: Explain a decision using natural language.

For Executive Decision Makers

  • Transparency: LLMs explain how and why an approval was made.

  • AI as a co-pilot: Use LLMs to evaluate nuanced decisions based on enterprise context.

  • Policy compliance: Summarize which thresholds, rules, or values were triggered.
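The "complex prompt chains" pattern above, injecting upstream variables into a prompt, can be sketched in plain Python. The template placeholders and variable names here are hypothetical illustrations, not GLIK's actual configuration syntax:

```python
# Hypothetical sketch: injecting upstream workflow variables into a prompt
# template before the LLM Block is invoked. The {placeholders} and the
# variable names are illustrative, not GLIK syntax.

prompt_template = (
    "You are reviewing an expense request.\n"
    "Vendor: {vendor}\n"
    "Amount: {amount}\n"
    "Policy threshold: {threshold}\n"
    "Explain whether this expense should be auto-approved."
)

# Values produced by upstream nodes or session variables.
upstream_vars = {"vendor": "Acme Corp", "amount": "$1,250", "threshold": "$1,000"}

rendered_prompt = prompt_template.format(**upstream_vars)
print(rendered_prompt)
```

Rendering the prompt deterministically before the model call keeps the injected context auditable.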


🛠 Common Configurations

  • Prompt Input: The system message or instruction for the LLM

  • Variables Used: Pulls from upstream nodes or session variables

  • Model Target: Can be set to a default model (e.g., GPT-4, Claude)

  • Output Binding: Stores output in a variable like conversation_notes
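Taken together, these properties might look like the following configuration. This dict is a hypothetical sketch of the shape of an LLM Block's settings; the field names are illustrative, not GLIK's real schema:

```python
# Hypothetical sketch of an LLM Block configuration, mirroring the four
# properties above. Field names are illustrative, not GLIK's real schema.

llm_block_config = {
    "prompt_input": "Explain the approval decision in plain language.",
    "variables_used": ["invoice_total", "vendor_history"],  # from upstream nodes
    "model_target": "gpt-4",                                # default model
    "output_binding": "conversation_notes",                 # where output is stored
}

# A workflow runtime could read the binding to know where to store the output.
output_variable = llm_block_config["output_binding"]
print(output_variable)
```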


📘 Example Usage

In the Expense Policy Decision Engine, the LLM Block is triggered when invoice data is incomplete or ambiguous. It generates an explanation and decision recommendation such as: "Based on the vendor history and lack of an uploaded policy document, this expense should be escalated for manual review."
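The trigger condition in this example, invoking the LLM only when invoice data is incomplete or ambiguous, can be sketched as a simple guard. The field names and helper function are hypothetical:

```python
# Hypothetical sketch of the Expense Policy Decision Engine's trigger logic:
# the LLM Block is only reached when required invoice fields are missing.

REQUIRED_FIELDS = {"vendor", "amount", "policy_document"}

def needs_llm_review(invoice: dict) -> bool:
    """Return True when the invoice is incomplete and should be routed
    to the LLM Block for an explanation and recommendation."""
    present = {key for key, value in invoice.items() if value}
    missing = REQUIRED_FIELDS - present
    return bool(missing)

incomplete = {"vendor": "Acme Corp", "amount": "$1,250", "policy_document": None}
complete = {"vendor": "Acme Corp", "amount": "$1,250", "policy_document": "doc.pdf"}

print(needs_llm_review(incomplete))  # True: policy_document is missing
print(needs_llm_review(complete))    # False: all required fields present
```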


🧠 Best Practices

  • Always use fallback conditions to avoid over-relying on the LLM

  • Keep prompts deterministic and structured if used in workflows that affect finance, security, or compliance

  • Include memory injection if context across steps is important
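The first best practice, fallback conditions, can be sketched as a guard around the LLM's output so the workflow never depends on the model returning something usable. The allowed-value check and function name here are hypothetical, not a GLIK feature:

```python
# Hypothetical sketch of a fallback condition around an LLM Block's output.
# If the model returns nothing usable, the workflow falls back to a
# deterministic default instead of relying on the LLM.

FALLBACK_DECISION = "escalate_for_manual_review"
ALLOWED_DECISIONS = {"approve", "reject", "escalate_for_manual_review"}

def resolve_decision(llm_output):
    """Accept the LLM's recommendation only when it is one of the allowed,
    structured values; otherwise return the deterministic fallback."""
    if llm_output and llm_output.strip().lower() in ALLOWED_DECISIONS:
        return llm_output.strip().lower()
    return FALLBACK_DECISION

print(resolve_decision("Approve"))   # normalized to "approve"
print(resolve_decision(None))        # empty output -> fallback
print(resolve_decision("maybe?"))    # unrecognized output -> fallback
```

Constraining the accepted outputs to a fixed vocabulary also supports the second practice: keeping prompts deterministic and structured in finance, security, or compliance workflows.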

In Progress

This page is currently under active development. Content may be incomplete, evolving, or placeholder-only. Please check back later for finalized documentation and fully structured examples.

🚀 Looking Ahead

Future versions of GLIK will support:

  • Fine-tuned models per organization

  • Prompt versioning and audit logs

  • Role-based prompt injection for use-case control
