AI code generation for the JavaScript component in Megaladata

Large language models (LLMs) now simplify code development by generating functional code from natural language prompts. For Megaladata’s JavaScript component, custom prompt templates enable rapid script generation, accelerating the creation of analytical workflows. This article explains how to select an LLM, prepare prompts, and apply practical recommendations.

Low-code approach in Megaladata

Megaladata’s low-code platform empowers users to design analytical solutions visually, using pre-built components to minimize manual coding. Workflows function as graphs, with each node performing data import, transformation, analysis, modeling, or export. This approach speeds up development and involves domain experts without deep programming skills.

However, some tasks require custom logic beyond standard components. For these cases, Megaladata offers programmable components, including JavaScript, to embed custom code directly into workflows.

JavaScript component and code generation

The JavaScript component in Megaladata serves as a flexible, programmable processor for tasks such as:

  • Processing complex data or performing extensive computations.
  • Integrating with external services not covered by built-in components.
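As an illustration of the first kind of task, the snippet below aggregates rows by a key in plain JavaScript. This is generic code with hypothetical field names (`region`, `amount`), not a Megaladata API; in a real JavaScript node the rows would come from the node's input port rather than a literal array.

```javascript
// Generic illustration: sum "amount" per "region" across row objects.
// In a Megaladata JavaScript node the rows would be read from the input port;
// here they are plain objects so the logic can run anywhere.
function totalsByRegion(rows) {
    var totals = {};
    for (var i = 0; i < rows.length; i++) {
        var r = rows[i];
        totals[r.region] = (totals[r.region] || 0) + r.amount;
    }
    return totals;
}

var sample = [
    { region: "EU", amount: 10 },
    { region: "US", amount: 5 },
    { region: "EU", amount: 7 }
];
var result = totalsByRegion(sample); // { EU: 17, US: 5 }
```

Custom logic of this kind is exactly what the programmable component is for: anything a pre-built node does not cover can be expressed as a short script.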

Writing such code by hand takes time and depends on the developer’s skill level. AI-powered code generation streamlines this process.

Prompt templates

Two prompt templates are available for different Megaladata versions. These templates include concise technical requirements to ensure accurate and detailed LLM queries for generating functional JavaScript code.

Example: Generating a script for OpenRouter API

How to use the template

Follow these steps:

  1. Open your LLM application page
  2. Upload the provided template file
  3. Enter your request

Example request (using ChatGPT-5.2 and the Megaladata 7.3 template):

Hi, I need a JavaScript node script that implements the following functionality: requesting the OpenRouter API to interact with an LLM and displaying the response. Details: Enter the API key in the variable port. Enter the user request in the variable port. The output should be the LLM response in a table. Please include an error message for cases where the service cannot be accessed.

Model response with code

Here’s a ready-made example for the JavaScript node in Megaladata: it takes the API key and the user’s request from the input variables, makes a request to OpenRouter Chat Completions, and outputs a single string with the result (or error).

Important: In Megaladata, to write a row to the output table, you must first call OutputTable.Append() and then OutputTable.Set(...). Otherwise, you’ll encounter errors like 'NewRow' or 'method not supported'.
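The Append-then-Set contract is easy to internalize with a tiny stand-in object. This is a plain-JavaScript mock written for illustration only, not the real `OutputTable` provided by `builtIn/Data`:

```javascript
// Minimal mock of the OutputTable contract described above (illustration only;
// the real Megaladata object comes from builtIn/Data and has more methods).
var MockOutputTable = {
    rows: [],
    Append: function () {
        // Append() creates the current row that subsequent Set() calls write into.
        this.rows.push({});
    },
    Set: function (name, value) {
        if (this.rows.length === 0) {
            // Mirrors the kind of error you hit when Set() is called before Append().
            throw new Error("No current row: call Append() before Set()");
        }
        this.rows[this.rows.length - 1][name] = value;
    }
};

// Correct order: Append first, then Set.
MockOutputTable.Append();
MockOutputTable.Set("response", "Hello from the LLM");
```

Calling `Set` on the mock before any `Append` throws, which is the behavior the note above warns about.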


// Megaladata JavaScript node script (ChakraCore)
// Purpose: Send a user prompt to OpenRouter (LLM) and write the response to OutputTable.
//
// How to use:
// 1) Create TWO input variables for this JavaScript node:
//    - api_key      (String)  : OpenRouter API key
//    - user_request (String)  : Prompt to send to the LLM
//    Optional:
//    - model        (String)  : OpenRouter model id (default is used if missing)
//
// Notes on Megaladata specifics:
// - Use InputVariables to read node variables (passed from the workflow).
// - Use OutputTable to form the output dataset; ALWAYS call OutputTable.Append() before OutputTable.Set(...).
// - Use builtIn/Fetch (fetch/Headers) instead of Node.js/browser APIs.

import { OutputTable, InputVariables, DataType } from "builtIn/Data";
import { fetch, Headers } from "builtIn/Fetch";

// ---- Configuration (variable names in Megaladata) ----
var VAR_API_KEY = "api_key";
var VAR_USER_REQUEST = "user_request";
var VAR_MODEL = "model";

// Default model if InputVariables["model"] is not provided.
// Change it if your OpenRouter account/project requires a different model.
var DEFAULT_MODEL = "openai/gpt-4o-mini";

function asTrimmedString(value) {
    if (value === null || value === undefined) return "";
    return String(value).replace(/^\s+|\s+$/g, "");
}

function getOptionalVarValue(name) {
    // InputVariables.Items[name] returns undefined when the variable is not defined for the node.
    var item = InputVariables.Items[name];
    if (item === undefined) return undefined;
    return item.Value;
}

function getRequiredVarValue(name, humanLabel) {
    var v = getOptionalVarValue(name);
    var s = asTrimmedString(v);
    if (s === "") {
        // Throwing will stop execution; we also write a user-friendly error to OutputTable below.
        throw new Error("Missing required input variable: " + name + (humanLabel ? (" (" + humanLabel + ")") : ""));
    }
    return s;
}

function safeJsonParse(text) {
    try {
        return { ok: true, value: JSON.parse(text) };
    } catch (e) {
        return { ok: false, error: e };
    }
}

async function callOpenRouterChatCompletions(apiKey, model, userRequest) {
    // OpenRouter Chat Completions API endpoint.
    var url = "https://openrouter.ai/api/v1/chat/completions";

    var headers = new Headers({
        "Content-Type": "application/json",
        "Authorization": "Bearer " + apiKey
    });

    var body = {
        model: model,
        messages: [
            { role: "user", content: userRequest }
        ]
    };

    // Important: builtIn/Fetch returns a Response; use .ok/.status and read body via .text()/.json().
    var response = await fetch(url, {
        method: "POST",
        headers: headers,
        body: JSON.stringify(body),
        redirect: "follow"
    });

    var responseText = await response.text();

    if (!response.ok) {
        // Include status and the raw response body for diagnostics (often contains JSON with error details).
        throw new Error("OpenRouter request failed. HTTP " + response.status + " " + response.statusText + ". Body: " + responseText);
    }

    var parsed = safeJsonParse(responseText);
    if (!parsed.ok) {
        throw new Error("OpenRouter returned non-JSON response. Body: " + responseText);
    }

    var json = parsed.value;
    // Typical OpenRouter response shape:
    // { choices: [ { message: { content: "..." } } ], ... }
    var content = "";
    if (json && json.choices && json.choices.length > 0) {
        var choice0 = json.choices[0];
        if (choice0 && choice0.message && choice0.message.content !== undefined && choice0.message.content !== null) {
            content = String(choice0.message.content);
        } else if (choice0 && choice0.text !== undefined && choice0.text !== null) {
            // Some providers may return "text" instead of chat "message.content".
            content = String(choice0.text);
        }
    }

    if (asTrimmedString(content) === "") {
        // Provide the full JSON as a fallback to avoid silently outputting empty results.
        content = JSON.stringify(json);
    }

    return content;
}

(async function main() {
    // Prepare output schema (2 columns: response and error).
    OutputTable.AssignColumns([
        { Name: "response", DisplayName: "LLM Response", DataType: DataType.String },
        { Name: "error", DisplayName: "Error", DataType: DataType.String }
    ]);

    try {
        var apiKey = getRequiredVarValue(VAR_API_KEY, "OpenRouter API key");
        var userRequest = getRequiredVarValue(VAR_USER_REQUEST, "User request/prompt");

        var modelValue = getOptionalVarValue(VAR_MODEL);
        var model = asTrimmedString(modelValue);
        if (model === "") model = DEFAULT_MODEL;

        var llmText = await callOpenRouterChatCompletions(apiKey, model, userRequest);

        OutputTable.Append();
        OutputTable.Set("response", llmText);
        OutputTable.Set("error", null);
    } catch (e) {
        // Error message for accessing the service (network issues, invalid key, API error, parsing error, etc.)
        var msg = (e && e.message) ? String(e.message) : String(e);

        OutputTable.Append();
        OutputTable.Set("response", null);
        OutputTable.Set("error", msg);
    }
})();
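The response-parsing branch of the script touches only plain JSON, so it can be exercised outside Megaladata. Below is a stand-alone copy of that extraction logic with hypothetical sample payloads, useful for sanity-checking the code in Node.js before pasting it into the node:

```javascript
// Stand-alone copy of the content-extraction logic from the generated script,
// so it can be tested in plain Node.js without Megaladata's builtIn modules.
function extractContent(json) {
    var content = "";
    if (json && json.choices && json.choices.length > 0) {
        var c0 = json.choices[0];
        if (c0 && c0.message && c0.message.content != null) {
            content = String(c0.message.content);
        } else if (c0 && c0.text != null) {
            content = String(c0.text); // some providers return "text" instead
        }
    }
    // Fall back to the raw JSON so an unexpected shape is never silently empty.
    return content.replace(/^\s+|\s+$/g, "") === "" ? JSON.stringify(json) : content;
}

// Hypothetical payloads mimicking the shapes the script handles:
var chatShape = { choices: [{ message: { content: "Hello" } }] }; // -> "Hello"
var textShape = { choices: [{ text: "Hi" }] };                    // -> "Hi"
var oddShape  = { error: { message: "rate limited" } };           // -> raw JSON string
```

Running the three shapes through `extractContent` confirms the fallback behavior: recognized shapes return the message text, and anything else returns the serialized payload for diagnostics.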

Here is how to create variables in the JavaScript node:

In the InputVariables port, create variables with the following names:

  • api_key (API key)
  • user_request (user query)
  • model (optional, e.g., openai/gpt-4o-mini)

This code was added to the JavaScript node and executed with no errors. Still, it may require optimization. Note that LLM outputs vary per request, even with identical prompts. Iterative debugging with the LLM is recommended.

Practical recommendations

  1. Precision in prompts: Define inputs, outputs, and requirements clearly. Ambiguity reduces code quality.

  2. Select the right LLM:

    • Use cloud-based models (e.g., ChatGPT-5 or Gemini 3) for handling long prompts and constraints.
    • Consider local models (e.g., Mamba-2, GPT-OSS-120B) if resources permit.
    • Avoid models like GPT-OSS-20B or Gemma 3:27B, as they often miss prompt details.
  3. Cost awareness: External LLM queries can be expensive. Monitor usage to avoid unnecessary costs.

  4. Documentation: If the LLM supports web search, link to Megaladata’s official documentation for context.

  5. Iterative refinement: Ask the LLM to clarify requirements before generation if details are unclear.

Conclusion

Combining AI code generation with Megaladata’s JavaScript component expands low-code capabilities. This enables rapid development of custom logic for non-standard tasks. With well-structured prompts and the right LLM, users can accelerate workflow creation without needing deep programming expertise.
