AI Agent

The agent framework backbone, with SociaHive as a first-class tool node.

LangChain has the largest production-agent footprint of any framework — Python and JS both, with LangGraph as the go-to for stateful, branching workflows. The langchain-mcp-adapters package (published on both PyPI and npm) gives you a one-line connection to any MCP server, including SociaHive's at https://sociahive.com/api/mcp. Once connected, every SociaHive tool (create_post, generate_flow, get_post_analytics, list_accounts, …) appears as a LangChain Tool you can pass to an agent or wire as a node in a LangGraph graph.

Get Started

Why pair LangChain with SociaHive

MCP support that's already shipping in LangChain

langchain-mcp-adapters is maintained by the LangChain team itself, not a community port — it's the recommended path for any MCP server in their docs. SociaHive sits inside that path identically to filesystem, GitHub, or database MCP servers. No SociaHive-specific Python code, no custom Tool subclass, no schema duplication.

LangGraph for non-trivial social workflows

Plain agents are fine for one-shot calls. LangGraph wins when the workflow has branches that depend on results — 'pull TikTok analytics; if retention < 30%, draft a follow-up Reel script and schedule it; otherwise notify Slack'. Each SociaHive tool becomes a node, and the graph state carries the data forward without you serializing/deserializing it.
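The retention branch above boils down to a plain routing function; in LangGraph you would pass it to add_conditional_edges, but the decision logic itself is just Python. A minimal sketch, where the state shape, node names, and 30% threshold are illustrative:

```python
def route_after_analytics(state: dict) -> str:
    """Pick the next node from TikTok retention (threshold is illustrative)."""
    retention = state["analytics"]["tiktok"]["avg_retention"]
    if retention < 0.30:
        return "draft_followup_reel"   # low retention: draft and schedule a follow-up
    return "notify_slack"              # healthy retention: just report
```

LangGraph calls this function with the current graph state and follows the edge whose name it returns, so each SociaHive result can steer the workflow without manual serialization.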

Production observability via LangSmith

Every SociaHive tool call traces into LangSmith alongside your model calls and other tool nodes. When a get_post_analytics returns the wrong shape or a create_post hits a tier limit, you see the full request/response in the LangSmith UI — same debugging surface you already use for the rest of your agent, no SociaHive dashboard tab-switching.
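Enabling that tracing is environment-only; a typical LangSmith setup looks like the following (the project name is illustrative):

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="<your LangSmith API key>"
export LANGCHAIN_PROJECT="sociahive-agent"   # illustrative project name
```

With these set, no code changes are needed; every tool span, including SociaHive calls, lands in the named project.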

Register SociaHive in a LangChain MultiServerMCPClient

The langchain-mcp-adapters package supports MCP servers over both stdio and HTTP. SociaHive runs HTTP, so use the streamable_http transport with header auth. The agent gets every SociaHive tool as a regular LangChain Tool — pass it to AgentExecutor, LangGraph, or any tool-using primitive in the framework.

Python — LangChain agent with SociaHive MCP
# pip install langchain langchain-mcp-adapters langchain-anthropic
import asyncio
import os

from langchain_anthropic import ChatAnthropic
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent


async def main():
    client = MultiServerMCPClient({
        "sociahive": {
            "transport": "streamable_http",
            "url": "https://sociahive.com/api/mcp",
            "headers": {
                "X-API-Key": os.environ["SOCIAHIVE_API_KEY"],
            },
        },
    })

    # Tools are discovered from SociaHive's MCP advertisement.
    tools = await client.get_tools()

    agent = create_react_agent(
        ChatAnthropic(model="claude-sonnet-4-6"),
        tools,
    )

    result = await agent.ainvoke({
        "messages": [
            ("user", "Pull last 7 days of Instagram + LinkedIn analytics, "
                     "summarise the top 3 posts, and draft a Threads recap."),
        ],
    })
    print(result["messages"][-1].content)


asyncio.run(main())

A real LangGraph workflow: weekly cross-platform recap with retry policy

A small LangGraph state machine pulls analytics from every SociaHive-connected platform, filters to the top performers, drafts a recap, and stages a scheduled post. Errors on any platform fan out to a fallback branch instead of failing the whole run.

  1. Node 'fetch_analytics': calls get_post_analytics for each platform in parallel using LangGraph's parallel branches. SociaHive returns post-level data; the node aggregates by engagement rate.
  2. Node 'filter_top': keeps the top 3 posts across all platforms. If any platform errored (rate limit, deauth), the error edge routes to 'notify_slack' instead of breaking the graph.
  3. Node 'draft_recap': calls the LLM with the top-3 summaries and your brand voice file. Output is a Threads-length post + a longer LinkedIn variant.
  4. Node 'stage_post': calls create_post twice — once per platform — with scheduledAt set to next Sunday 6pm. Returns SociaHive's pending_confirmation envelope for the publish step (publishing is destructive).
  5. Node 'notify_human': posts a Slack message with the confirmation URLs so you can approve from the dashboard before SociaHive actually publishes.
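The "next Sunday 6pm" scheduledAt value in step 4 can be computed with a small helper; the function name and the 18:00 convention are illustrative, not part of the SociaHive API:

```python
from datetime import datetime, timedelta

def next_sunday_6pm(now: datetime) -> str:
    """Return an ISO-8601 timestamp for the next Sunday at 18:00."""
    days_ahead = (6 - now.weekday()) % 7 or 7   # Monday=0 ... Sunday=6; never "today"
    target = (now + timedelta(days=days_ahead)).replace(
        hour=18, minute=0, second=0, microsecond=0
    )
    return target.isoformat()
```

The result can be stored in graph state (e.g. as a hypothetical `sunday_6pm_iso` key) and passed straight into scheduledAt.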
LangGraph nodes (excerpt)
# Inside the graph definition. SociaHive tools came from MultiServerMCPClient.
import asyncio

async def fetch_analytics(state):
    by_platform = {
        "instagram": state["accounts"]["instagram_id"],
        "linkedin":  state["accounts"]["linkedin_id"],
        "tiktok":    state["accounts"]["tiktok_id"],
    }
    results = await asyncio.gather(*[
        get_post_analytics_tool.ainvoke({
            "platform": p,
            "accountId": acc_id,
            "since": "2026-04-30",
        })
        for p, acc_id in by_platform.items()
    ], return_exceptions=True)
    return {"analytics": results}

async def stage_post(state):
    # SociaHive returns pending_confirmation here — destructive action.
    res = await create_post_tool.ainvoke({
        "platform":    "linkedin",
        "accountId":   state["accounts"]["linkedin_id"],
        "caption":     state["recap_long"],
        "scheduledAt": state["sunday_6pm_iso"],
    })
    return {"pending_confirmation": res}

# Notes:
#   - generate_flow consumes AI credits (refunded on error).
#   - create_post / schedule_post do NOT consume AI credits.
#   - publish_post_now and activate_flow return pending_confirmation
#     envelopes — surface them to a human via 'notify_human' node.
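Because fetch_analytics uses asyncio.gather with return_exceptions=True, a downstream node has to separate per-platform payloads from exceptions before routing to 'filter_top' or 'notify_slack'. A minimal sketch, with a hypothetical helper name:

```python
def split_results(platforms: list[str], results: list) -> tuple[dict, dict]:
    """Split gather(..., return_exceptions=True) output into successes and failures.

    `results` is positionally aligned with `platforms`.
    """
    ok: dict = {}
    failed: dict = {}
    for platform, res in zip(platforms, results):
        (failed if isinstance(res, Exception) else ok)[platform] = res
    return ok, failed
```

The 'filter_top' node then works only on `ok`, while a non-empty `failed` dict is what the error edge to 'notify_slack' keys on.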

What You Can Do with LangChain + SociaHive

  • Connect once via langchain-mcp-adapters; every SociaHive tool flows into your existing LangChain agents
  • Build LangGraph state machines that span Instagram, WhatsApp, LinkedIn, Threads, TikTok and the rest
  • Production-grade observability — LangSmith traces every SociaHive tool call alongside your other steps
  • Same Python or JS code talks to SociaHive in dev and prod; no per-platform glue layer

How to Connect LangChain

1. Create Your SociaHive Account

Sign up at sociahive.com and connect your Instagram Business or Creator account. No LangChain credentials are needed yet.

2. Find LangChain in Integrations

Open your SociaHive dashboard, go to Settings > Integrations, and select LangChain from the AI Agent section.

3. Authorize LangChain

Click "Connect" to grant read and write access; this issues the API key your LangChain client will send in the X-API-Key header.

4. Set Up Your LangChain Workspace

Install langchain-mcp-adapters (Python: pip install langchain-mcp-adapters; JS: npm install @langchain/mcp-adapters) and use the MultiServerMCPClient to register SociaHive. Set the SOCIAHIVE_API_KEY env var so the X-API-Key header gets injected automatically — same key works in dev and prod.
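In shell terms, the Python side of this step is two lines (the key value is a placeholder):

```shell
pip install langchain-mcp-adapters          # Python; for JS: npm install @langchain/mcp-adapters
export SOCIAHIVE_API_KEY="<your API key>"   # read by the client and sent as X-API-Key
```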

5. Test Your LangChain Connection

Run a quick read-only call from your agent — for example, ask it to list_accounts — and confirm your connected accounts come back. Then try get_post_analytics to verify data flows end to end before wiring up write actions.

Use Cases with LangChain

Other AI Agent Integrations

Frequently asked questions

Does LangChain support MCP servers natively?

Through the official langchain-mcp-adapters package, yes — maintained by the LangChain team and available on PyPI for Python and npm for JS. It exposes both stdio and streamable HTTP transports; SociaHive uses streamable HTTP. The adapter discovers tools from the server's advertisement, so you don't hand-write Tool classes for SociaHive's 40+ tools — they're just there.

Can I use LangGraph instead of plain LangChain agents?

Yes — and you probably should for any non-trivial social workflow. LangGraph's state machine model handles branching, retry policies, and parallel branches natively. The MCP-loaded SociaHive tools become regular nodes in the graph; you pass them to create_react_agent or wire them as ToolNodes manually. State carries SociaHive's tool results forward without serialization.

How do destructive actions and confirmations work in a LangChain agent?

SociaHive returns a pending_confirmation envelope for publish, activate, and delete tool calls. The LangChain tool wrapper surfaces that envelope as a tool result — the agent sees a structured response, not an exception. Your graph should branch on that envelope: route to a 'notify_human' node, post the confirmation URL to Slack or wherever, and resume the graph after approval. Read-only calls execute inline.
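That branch can again be expressed as a plain routing function. The envelope shape checked here (a `status` field set to `pending_confirmation`) is an assumption for illustration, not a documented schema:

```python
def route_after_stage(state: dict) -> str:
    """Send pending confirmations to a human; otherwise finish the run."""
    envelope = state.get("pending_confirmation") or {}
    if envelope.get("status") == "pending_confirmation":
        return "notify_human"   # surface the confirmation URL for approval
    return "done"
```

Wired via add_conditional_edges after the staging node, this keeps destructive publishes gated behind the human-approval branch while everything else completes normally.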

Will LangChain see all 40+ SociaHive MCP tools?

Yes, modulo feature flags. langchain-mcp-adapters fetches the tool list from SociaHive's advertisement at connection time. Tools requiring CONTACT_CRM (list_contacts, get_contact) or AI_COPILOT_DESTRUCTIVE_TOOLS (delete_*) only appear if your account has those flags enabled — same gating as Claude Code or Cursor, applied server-side before the advertisement is built.

Does LangSmith trace SociaHive tool calls?

Yes. langchain-mcp-adapters wraps MCP tools in standard LangChain Tool primitives, which LangSmith traces by default. Every create_post, generate_flow, get_post_analytics shows up as a tool span with full request/response payloads (PII allow-list applied per SociaHive's audit policy). When a tool errors, the LangSmith trace shows the typed error class — same surface you already use for everything else in your agent.

Comparing AI tools for social media automation? See the full hub: MCP for social media automation — or the developer-focused product page at /mcp.

Connect LangChain with SociaHive

14-day free trial included. Plans from $29/mo. No credit card required.

Get Started