Short-term AI memory lives in the context window and resets when the session ends. Long-term AI memory lives in external stores and persists across sessions, agents, and time. Both definitions are correct. Both are also incomplete.
The part the industry has not named: these two memory types require fundamentally different governance architectures. Short-term memory is inherently disposable. If bad data enters a session, it expires naturally when the session closes. Long-term memory is durable — bad data does not expire. It persists, compounds, and becomes the confident ground truth that your agent returns to every user, in every session, until someone manually corrects it.
State of AI Agent Memory 2026 (Mem0) names this problem explicitly: staleness in long-term memory is “unresolved” and produces agents that are “confidently wrong.” The industry’s proposed solution — decay functions, TTL, dynamic forgetting — addresses retrieval recency. It does not address source certification. Those are different problems.
A common pattern among enterprise teams is building one pipeline for both memory types. Session context and institutional knowledge flow into the same vector store, the same embedding space, the same retrieval function. Practitioner communities on Reddit and HN document this consistently: agents retrieving stale business rules with high similarity scores because the embedding space cannot distinguish “this was certified last quarter” from “this was inferred in a conversation last week.” The Infosys Enterprise Memory Architecture report (February 2026) prescribes “curate before storage” as a corrective principle — which implies the uncurated approach is the observed default.
This page explains the distinction, the design requirements that follow from it, and why the long-term memory problem at enterprise scale is not an engineering problem — it is a governance problem.
Quick Comparison Table
| Dimension | Short-term AI memory | Long-term AI memory |
|---|---|---|
| What it is | Session-scoped, in-context storage (attention window) | External storage that persists across sessions and agents |
| Scope | Single conversation or task | Cross-session, cross-agent, organizational |
| What it does | Holds working state, chat history, current task context | Stores facts, preferences, definitions, institutional knowledge |
| Primary optimization | Speed and throwability — sub-millisecond latency | Provenance, certification, versioning, freshness signals |
| Who owns it | Runtime — managed by the agent framework | Data teams, governance owners, or the memory framework |
| Failure mode | Stale context within session; resets naturally | Confidently wrong facts that persist and compound across every future interaction |
| Governance requirement | Low — bad data expires with the session | High — bad data persists, scales, and becomes ground truth |
| Representative tools | Mem0 selective compression, LangMem, Letta working memory | Vector DBs, knowledge graphs, governed data catalogs |
Learn how these memory types fit into the full system: What Is an AI Memory System? and Memory Layer for AI Agents.
Short-term vs long-term AI memory: what’s the difference?
Short-term and long-term AI memory are not just different storage tiers. They are different problems with different requirements.
Short-term memory lives inside the active context window. It holds the current conversation, working task state, tool call outputs, and intermediate reasoning — anything the agent needs right now. It is session-scoped: when the session ends, short-term memory resets. This reset is not a limitation. It is by design. An agent carrying every session’s observations forward indefinitely would face noise accumulation, context pollution, and compliance exposure.
Long-term memory lives outside the model in external stores — vector databases, knowledge graphs, key-value stores, relational databases. It persists across sessions, across agents, and potentially across the entire organization. This is the information the agent “knows” regardless of whether it was trained on it: past interaction patterns, learned preferences, domain facts, business definitions.
The architectural distinction is well-established across the field. A 218-paper survey on AI agent memory (arXiv:2602.06052, February 2026) maps this across temporal scales, identifying five core operations (storage, retrieval, update, compression, forgetting) and a taxonomy spanning working, episodic, semantic, and procedural memory. The hot/warm/cold tier architecture documented at Analytics Vidhya is the standard implementation pattern.
What the field has not named: these two memory types have fundamentally different governance requirements, not just different storage architectures. This distinction matters more than the architectural one.
The governance split no one names
Short-term memory is tolerant of low governance. It is throwable by design. A session that receives bad data — stale metrics, injected noise, an incorrect assumption — corrects itself when the session ends. The blast radius is bounded to one conversation.
Long-term memory is intolerant of ungoverned inputs. Persistence is the property that makes governance mandatory. A fact that enters long-term memory with incorrect provenance does not expire. It gets retrieved with high similarity scores in future sessions. It becomes the confident wrong answer that the agent repeats at scale, across hundreds of users, until someone detects it — which, per the Governed Memory paper (arXiv:2603.17787, March 2026), may not happen for months: “no per-property accuracy monitoring — fields are wrong for months before detection.”
The failure to name this distinction is why most enterprise agent deployments produce “confidently wrong” answers at scale.
What is short-term AI memory?
Short-term AI memory is the information an agent holds within its active context window during a session. It includes the current conversation history, working task state, tool call outputs, intermediate reasoning steps, and any in-context retrieval results from the current interaction.
It exists only as long as the session is alive. When the session ends, the context window clears. This is the defining property of short-term memory: it is session-scoped.
The primary optimization target for short-term memory is speed. Voice AI agents — the fastest-growing use case for memory frameworks in 2026, per Mem0’s State of AI Agent Memory 2026 — require sub-millisecond in-context retrieval. According to Redis’s analysis of AI agent memory architectures, FIFO queues with in-memory storage achieve the microsecond-scale latency that real-time conversational agents require.
See the full pipeline in How AI Memory Systems Work.
Short-term AI memory has five core components:
Context window
The active attention space — bounded by the model’s token limit. Modern models offer 128K to 1M+ tokens of context, but token cost and latency increase with window size. The context window is not infinite working memory; it is a bounded computational resource.
Working memory buffer
The current task state, tool outputs, and intermediate reasoning steps held in-context. This is what the agent is actively processing — not past history, not stored knowledge, but the live material of the current task.
Conversation history
Prior turns in the current session, managed via FIFO eviction as the window approaches its token limit. Oldest context is dropped first. Recency is the eviction criterion.
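The mechanism is simple enough to sketch. Below is a minimal, illustrative FIFO eviction loop against a token budget; the function and variable names are invented for this example, not taken from any framework:

```python
from collections import deque

def fifo_evict(turns: deque, token_counts: deque, budget: int) -> None:
    """Drop the oldest turns until the conversation fits the token budget."""
    while sum(token_counts) > budget and turns:
        turns.popleft()        # oldest context is dropped first
        token_counts.popleft()

# Hypothetical session: three turns and their illustrative token costs.
history = deque(["turn 1", "turn 2", "turn 3"])
costs = deque([4000, 3000, 2000])
fifo_evict(history, costs, budget=6000)
# the oldest turn is evicted; the two most recent turns remain
```

Note the eviction criterion is purely positional: nothing in this loop asks whether the dropped turn contained a fact worth keeping, which is exactly the gap that summarization and selective compression address.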
Recursive summarization
The MemGPT/Letta mechanism for compressing older context before eviction to preserve semantic continuity. Rather than discarding old turns entirely, the agent summarizes them, preserving key facts in compressed form while freeing token budget. Details in the MemGPT foundational paper (arXiv:2310.08560).
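As a rough sketch of the pattern only (not the MemGPT/Letta API): `summarize` stands in for an LLM call that folds an evicted turn into a running summary, and every name here is illustrative.

```python
def recursive_summarize(turns, summary, budget, count_tokens, summarize):
    """Compress the oldest turns into a running summary when over budget.

    `count_tokens` measures a string's token cost; `summarize` stands in
    for an LLM call that merges an evicted turn into the summary.
    """
    while turns and count_tokens(summary) + sum(map(count_tokens, turns)) > budget:
        oldest = turns.pop(0)
        # fold the evicted turn into the summary instead of discarding it
        summary = summarize(summary, oldest)
    return turns, summary
```

The design choice versus plain FIFO: the token budget is still enforced, but key facts from evicted turns survive in compressed form inside `summary`.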
Selective compression
Mem0’s approach: rather than summarizing chronologically, extract only salient facts from conversation history. Analytics Vidhya’s framework comparison documents Mem0 achieving 80% token reduction via selective compression — a significant efficiency gain for high-volume session contexts.
What is long-term AI memory?
Long-term AI memory is information stored in external systems that persists across sessions, users, and agents. It is what the agent “knows” that was not in its training data and is not in its current context window.
The scope is cross-session and potentially organizational. A fact in long-term memory is available to any agent with retrieval access, in any future session. This is both the power and the risk of long-term memory: the same persistence that makes it useful makes bad data dangerous.
A note on scope: not all long-term memory carries the same governance burden. User preference memory — conversation tone, interface settings, personal interaction history — is legitimate long-term memory with a relatively low governance bar; errors are contained to individual user experience. Organizational semantic memory — metric definitions, business rules, certified data glossary terms — has a high governance bar; errors scale to every agent session that retrieves them, across every user, indefinitely. This page focuses on the latter: enterprise data agents operating on organizational semantic memory, where the consequences of ungoverned long-term storage are measurable and persistent.
The standard taxonomy, documented in the 218-paper survey (arXiv:2602.06052), splits long-term memory into episodic (past interaction records), semantic (facts about the world or the organization), and procedural (learned task patterns). The Governed Memory paper (arXiv:2603.17787) identifies five structural problems that emerge in ungoverned long-term memory: memory silos, governance fragmentation, unstructured dead-ends, context redundancy, and silent degradation.
Mem0’s 2026 report names the core operational problem: staleness is “unresolved.” A memory about a user’s employer, a metric definition, a business rule — all are “highly relevant until they are not, at which point they become confidently wrong.” Dynamic forgetting addresses retrieval recency. It does not address source certification.
For enterprise context, see Enterprise AI Memory Layer: Architecture for Data Leaders and Types of AI Agent Memory.
Long-term AI memory has five core components:
Episodic memory
Records of past interactions, task outcomes, and user-specific history. Stored in vector databases or temporal knowledge graphs. Zep/Graphiti uses bi-temporal tracking (valid_at, expired_at) to record when facts were valid — the most governance-aware approach in the open-source ecosystem for episodic memory, though it tracks validity windows rather than source authorization.
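The bi-temporal idea can be illustrated in a few lines. This is a hypothetical sketch of validity-window filtering, not Zep/Graphiti's actual data model:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class Fact:
    text: str
    valid_at: datetime              # when the fact became true
    expired_at: Optional[datetime]  # None means not yet invalidated

def facts_as_of(facts: List[Fact], when: datetime) -> List[Fact]:
    """Bi-temporal filter: keep only facts that were valid at `when`."""
    return [f for f in facts
            if f.valid_at <= when and (f.expired_at is None or when < f.expired_at)]

# Illustrative scenario: a metric definition was revised mid-2025.
old = Fact("net_revenue includes intercompany", datetime(2025, 1, 1), datetime(2025, 9, 1))
new = Fact("net_revenue excludes intercompany", datetime(2025, 9, 1), None)
current = facts_as_of([old, new], datetime(2026, 2, 1))
```

The filter answers "what was true when" — but nothing in the record says who was authorized to assert either version, which is the source-certification gap described above.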
Semantic memory
Facts about the domain, the organization, or the world. Metric definitions, business rules, entity relationships. This is the long-term memory type that most directly requires governance for enterprise reliability — a stale or uncertified metric definition is not a retrieval problem, it is an accuracy problem that scales with every agent session that retrieves it.
Procedural memory
Learned patterns for completing specific task types — which tools to invoke in sequence, how to format an output, which data sources to query for a given request type. More stable than semantic memory; less susceptible to staleness-driven error.
Organizational context memory
The fifth memory type identified in Atlan’s published taxonomy of AI agent memory types: certified definitions with version history, approval timestamps, and ownership records. This is the type that no standalone memory framework supports natively, because it requires governance infrastructure that sits upstream of the memory system itself.
Provenance layer
Origin signature, version chain, source agent tag. Present in advanced frameworks — MemOS’s MemCube model (arXiv:2507.03724) introduces provenance-aware memory units; Mem0’s Actor-Aware Memories (June 2025) tag memories by source agent. These are the ecosystem’s most sophisticated governance features. They track how a fact entered memory. They do not answer whether the source was authorized to assert it.
Long-term vs short-term AI memory: head-to-head comparison
The deepest differences between short-term and long-term memory are not in storage substrate. They are in governance requirements and failure modes.
Short-term failures are local and self-correcting. Context overflow causes session confusion. FIFO eviction drops older turns, occasionally losing relevant context. The agent gets slower as the window fills. All of these resolve on session restart. No residue, no compounding error, no persistent damage.
Long-term failures are global and persistent. Per the Governed Memory paper (arXiv:2603.17787), silent degradation means “fields are wrong for months before detection.” There is no equivalent of a session restart for long-term memory. A wrong fact retrieved with high similarity scores propagates across every future interaction until someone identifies and corrects it — which requires monitoring infrastructure that most teams have not built.
The Mem0 LOCOMO benchmark (State of AI Agent Memory 2026) reveals a related tradeoff at the system level: full-context retrieval achieves 72.9% accuracy but requires 9.87 seconds median latency and approximately 26,000 tokens per conversation, making it production-impractical. Selective approaches achieve 91% lower latency at the cost of 6 percentage points of accuracy. This tradeoff is real and unsolved at the retrieval level. It does not address source-level certification at all.
Detailed comparison
| Dimension | Short-term memory | Long-term memory |
|---|---|---|
| Primary focus | Speed and in-session coherence | Cross-session persistence and reliability |
| Storage substrate | In-context (attention), FIFO queues, RAM | Vector DBs (HNSW/IVF), knowledge graphs, key-value stores |
| Latency target | Sub-millisecond (voice AI requirement) | Milliseconds to hundreds of milliseconds acceptable |
| Key stakeholder | Agent framework / runtime | Data governance team, memory ops, or data catalog owner |
| Certification requirement | None — session-scoped data is inherently transient | Required — uncertified facts persist and compound as ground truth |
| Failure mode | Context overflow, session-level confusion | Silent degradation — “confidently wrong” facts recalled at high similarity scores |
| Staleness handling | N/A — resets on session end | Unresolved in most frameworks; requires active monitoring or a governed source |
| Governance framework | None needed — throwable by design | Provenance (origin, version, ownership), freshness signals, access control |
| Tooling approach | Mem0 selective compression, LangMem, Letta paging | Vector DBs + governance layer, or governed data catalog as source |
| Maturity indicator | Sub-millisecond retrieval, clean context handoff | Zero silent degradation, certified semantic facts, audit-ready provenance |
Short-term memory fails loudly and locally. Long-term memory fails silently and at scale.
Real-world example
Consider a data analyst agent at a financial services firm. During a session, the agent holds the current SQL draft, the last three tool outputs, and the analyst’s clarifying questions in its context window. All of this is short-term memory. When the session ends, it clears. If the SQL draft had a bug, that bug disappears with the session.
Now consider the agent’s long-term semantic memory: the certified definition of net_revenue. If that definition was ingested into a vector store six months ago and the business has since revised its calculation methodology — excluding intercompany transactions, for example — the agent returns the old definition with high similarity confidence. It is confidently wrong. Every analyst session that asks about net revenue receives the same wrong answer until someone audits the vector store and manually corrects the embedding.
This is not a retrieval engineering problem. It is a source-of-truth problem. The fix cannot come from the retrieval layer — it must come from the source layer, before the fact is ingested into long-term memory at all.
For context on why the cold-start variant of this problem is equally damaging, see The AI Agent Cold-Start Problem Explained.
See how enterprise teams structure governed long-term memory alongside session context.
Get the Stack Guide

How short-term and long-term memory work together
The field consensus is “you need both.” Short-term memory handles fast in-session reasoning. Long-term memory supplies cross-session knowledge. This is correct — and incomplete. The challenge is that these two types have opposite optimization targets. A unified pipeline degrades both.
Tiered memory architecture (hot/warm/cold)
The standard architecture uses three tiers. Hot tier (short-term) handles active session context with sub-millisecond retrieval. Cold tier (long-term) holds persistent semantic and episodic knowledge — slower to retrieve, but the source of organizational truth. Warm tier manages recent cross-session summaries: content too recent to be fully indexed but no longer in active context.
The agent composes context from all three tiers at inference time. Short-term contributes session-specific intent. Long-term contributes stable institutional knowledge. Warm tier bridges the gap. The architecture is sound. The constraint is at the cold tier’s input gate.
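The composition step can be sketched in a budget-filling loop. This assumes each tier exposes a candidate-snippet lookup; all names are illustrative, not a specific framework's API:

```python
def compose_context(query, hot, warm, cold, budget, count_tokens):
    """Assemble the inference-time context from all three tiers.

    `hot`, `warm`, `cold` are callables returning candidate snippets
    for the query. Session state is admitted first, then warm
    summaries, then cold-tier institutional facts, until the token
    budget is spent.
    """
    context, used = [], 0
    for tier in (hot, warm, cold):
        for snippet in tier(query):
            cost = count_tokens(snippet)
            if used + cost > budget:
                return context
            context.append(snippet)
            used += cost
    return context
```

The tier ordering here (hot before cold) is one possible design choice; the point the sketch makes is that composition quality is capped by what each tier contains, which is why the cold tier's input gate matters.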
Eviction into long-term memory — the certification gate
When session context is evicted from the context window, what gets promoted to long-term storage? This gate is where most teams fail. Session observations — conversation summaries, inferred user preferences, task outcomes — enter the same vector store as certified institutional facts. No governance distinction is applied.
The Infosys Enterprise Memory Architecture blog (February 2026) calls this the core design error: “Do not copy chaos. Connect to truth.” The recommended approach is “curate before storage” — tagging content by authority level (Policy, Standard, Guideline, Opinion) before it enters the long-term store. Only governed, attributed content should enter permanent storage.
This is not a new principle. It is the principle that data catalogs have applied to data assets for years. The AI Memory Ingestion Pipeline page covers the certification gate in detail.
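The “curate before storage” gate can be sketched as a filter on candidate memories. The authority labels follow the Infosys framing; the record shape and function are hypothetical, not any framework's API:

```python
from dataclasses import dataclass
from typing import List

# Authority levels per the Infosys framing; only governed tiers are admitted.
ADMITTED_AUTHORITY = {"Policy", "Standard"}

@dataclass
class Candidate:
    content: str
    authority: str   # "Policy" | "Standard" | "Guideline" | "Opinion"
    source: str      # who or what asserted the content
    certified: bool  # passed a human or catalog certification step

def promotion_gate(candidates: List[Candidate]) -> List[Candidate]:
    """Admit only governed, attributed content into long-term storage."""
    return [c for c in candidates
            if c.authority in ADMITTED_AUTHORITY and c.certified and c.source]

# Illustrative eviction batch: one certified definition, one session inference.
session_evictions = [
    Candidate("net_revenue excludes intercompany", "Standard", "glossary", True),
    Candidate("user probably prefers CSV exports", "Opinion", "session-1234", False),
]
promoted = promotion_gate(session_evictions)
```

The session-level inference is rejected at the gate rather than entering the same store as the certified definition — the governance distinction the ungoverned default pipeline never applies.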
Retrieval-time composition
At inference time, the agent composes context from both tiers — pulling long-term semantic facts and short-term session history into the active window. The quality of this composition depends entirely on the quality of what is in each tier. Short-term quality is bounded to the current session. Long-term quality is bounded to what was certified before ingestion.
An agent with excellent short-term management and uncertified long-term memory will still return wrong answers whenever it retrieves from its semantic store. The retrieval layer cannot fix governance problems at the source layer.
For more on how these tiers interact with the RAG pattern, see AI Memory System vs RAG.
When to prioritize each
Permalink to “When to prioritize each”- Short-term optimization first: Voice AI agents, real-time customer service, high-frequency trading copilots — latency is the binding constraint. Short-term memory engineering (compression, eviction strategy, context window sizing) drives accuracy.
- Long-term investment first: Enterprise data agents, analytics copilots, multi-agent workflows where organizational knowledge must be consistent across agents. Governance quality of the long-term store drives reliability.
- Both simultaneously: Any enterprise agentic system that must be reliable, auditable, and consistent — the default for regulated industries. For enterprises operating AI systems classified as high-risk under the EU AI Act — which includes financial, healthcare, and infrastructure decision systems — the compliance deadline is August 2, 2026. High-risk AI systems must maintain full audit trails, a requirement that long-term memory without provenance tracking cannot satisfy. Penalties reach up to 35 million euros or 7% of global turnover.
How Atlan approaches long-term and short-term AI memory
Most enterprise teams build one vector database for both session context and organizational knowledge. Session-level observations and certified institutional facts end up in the same embedding space with no governance distinction. The result is what the research consistently documents: 95% of enterprise AI pilots delivered zero measurable ROI (MIT NANDA, 2025), and 8 in 10 companies cite data limitations — not model limitations, not retrieval architecture — as the primary blocker to scaling agentic AI (McKinsey, 2026).
McKinsey’s prescription states the same conviction in enterprise strategy language: “one data foundation for analytics and AI — build data once, use everywhere, with clear common definitions.” The data catalog is that foundation. Long-term AI memory is a consumer of that foundation — not a parallel build.
Atlan’s context layer functions as the governed long-term semantic memory foundation for enterprise data agents. Not a memory framework to build, but a governed source to connect.
- Active Metadata Engine: Continuously updated metadata provides the freshness signals that solve long-term memory staleness at the source. Not by detecting decay after ingestion, but by maintaining live state in the governed catalog. This is the mechanism described in Active Metadata as AI Agent Memory.
- Certified business glossary: Human-certified metric definitions with version history, ownership records, and approval timestamps. The exact governance properties that long-term semantic memory requires but that no vector database can encode natively.
- Column-level lineage: Provenance tracking at the asset level — which table, which transformation, which certified output. The enterprise-grade equivalent of MemOS’s MemCube origin signatures, except it already exists and is maintained.
- Context Studio via MCP: Bootstraps agent long-term memory from existing dashboards, semantic definitions, and column-level lineage. Connection, not construction.
The performance case for this approach: adding organizational ontology as governed long-term context produced a 20% improvement in answer accuracy and 39% fewer tool calls in Atlan’s Snowflake integration research. Text-to-SQL accuracy improved 3x with governed metadata grounding versus bare schemas. These are retrieval-layer metrics produced by source-layer governance.
CME Group has cataloged 18 million-plus assets and maintains 1,300+ glossary terms — the governed long-term semantic memory that AI agents read from directly, rather than requiring bespoke ingestion pipelines. DigiKey’s Chief Data Officer describes Atlan as “a context operating system” — distinct from a catalog because it actively delivers context to AI models via MCP server, not just stores it.
See the full architecture in How Atlan’s Context Layer Functions as Enterprise Memory and the definitional framing in Memory Layer vs Context Layer.
How governed long-term context delivers accuracy that retrieval engineering alone cannot.
Download E-Book

Real stories from real customers: memory layers in production
"We're excited to build the future of AI governance with Atlan. All of the work that we did to get to a shared language at Workday can be leveraged by AI via Atlan's MCP server…as part of Atlan's AI Labs, we're co-building the semantic layer that AI needs with new constructs, like context products."
— Joe DosSantos, VP of Enterprise Data & Analytics, Workday
"Atlan is much more than a catalog of catalogs. It's more of a context operating system…Atlan enabled us to easily activate metadata for everything from discovery in the marketplace to AI governance to data quality to an MCP server delivering context to AI models."
— Sridher Arumugham, Chief Data & Analytics Officer, DigiKey
Wrapping up
The architectural distinction between short-term and long-term AI memory is not the insight — it is the starting point. The insight is what follows: these two memory types require fundamentally different governance architectures, and treating them as one system is the design mistake that produces confident-but-wrong enterprise AI at scale.
Short-term memory is tolerant of imperfect inputs. The session ends; the errors clear. Build fast, compress well, evict gracefully. The tooling — Mem0, LangMem, Letta — is mature and effective.
Long-term memory is intolerant of imperfect inputs. The errors do not clear; they compound. The right question is not “how do we build better long-term memory?” It is “how do we connect our agents to the authoritative source that is already governed?”
Gartner projects that 60% of AI projects will be abandoned due to context and data readiness gaps through 2026 — not model quality, not retrieval architecture. The data readiness problem is a long-term memory problem. Teams that connect to governed sources now avoid the replatforming cost of correcting ungoverned semantic stores later.
The context layer enterprise teams already maintain — with certified definitions, ownership, lineage, and freshness signals — is the long-term memory foundation enterprise agents need. The build problem was always a connection problem.
Evaluate where your long-term memory governance stands today and what to prioritize next.
Check Context Maturity

Ready to connect your long-term AI memory to a governed source of truth?
Book a Demo

FAQs about long-term vs short-term AI memory
1. What is the difference between short-term and long-term AI memory?
Short-term AI memory is session-scoped, lives in the context window, and resets when the conversation ends. Long-term AI memory lives in external stores — vector databases, knowledge graphs — and persists across sessions and agents. The deeper difference is governance: short-term memory is tolerant of unverified inputs because bad data expires with the session. Long-term memory is not tolerant — bad data persists, is retrieved with high similarity confidence, and compounds as the agent’s ground truth across every future interaction.
2. What is the difference between in-context and external AI memory?
In-context memory is the attention window — everything the model is actively processing during a session. External memory is stored outside the model in vector databases, knowledge graphs, or key-value stores and retrieved at inference time. In-context memory is short-term: bounded by token limits and session duration. External memory is long-term: unbounded in scope but requiring certification for reliability, because persistence is the property that makes errors permanent.
3. Can AI long-term memory become outdated or incorrect?
Yes. Mem0’s 2026 report explicitly calls staleness in long-term memory “unresolved” — a memory about an organization’s metric definition, a user’s employer, or a business rule is “highly relevant until it is not, at which point it becomes confidently wrong.” Current framework solutions — decay functions, TTL, dynamic forgetting — address retrieval recency. They do not address source certification. The deeper fix for organizational semantic memory is connecting to a governed source that tracks freshness authoritatively rather than detecting decay after ingestion.
4. What is working memory in AI agents?
Working memory is the in-context buffer where agents hold current task state, tool outputs, and conversation history during a session. It is the AI equivalent of human short-term memory: bounded, fast, and temporary. The context window defines the working memory limit — typically 128K to 1M+ tokens depending on the model. When the window fills, eviction mechanisms (FIFO, recursive summarization, selective compression) determine which content is preserved.
5. What is episodic memory in AI agents?
Episodic memory stores records of past interactions, task outcomes, and user-specific history across sessions. It is a component of long-term memory, distinguished from semantic memory (facts about the domain or organization) and procedural memory (learned task patterns). Episodic memory is typically stored in vector databases or temporal knowledge graphs. Because it persists across sessions, it requires provenance tracking — at minimum, a record of when the fact was valid, and ideally a record of whether the source was authorized to assert it.
6. What is memory eviction in AI systems?
Eviction is the mechanism that removes information from the context window when it approaches its token limit. Common approaches: FIFO (oldest turns removed first), LRU (least recently used content removed), recursive summarization (Letta’s MemGPT approach — compress older content before removing it), and selective compression (Mem0 — extract salient facts, discard the rest for 80% token reduction). Evicted content may be discarded or promoted to long-term storage. The promotion gate — what gets written to the long-term store and with what governance — is the critical design decision most teams leave unresolved.
7. What are the governance requirements for enterprise AI long-term memory?
Enterprise long-term memory requires provenance (origin, source agent, certification status), versioning (what version of this fact is current), freshness signals (when was this last validated), access control (which agents can retrieve which facts), and ownership records (who is responsible for this fact’s accuracy). These are certification requirements, not retrieval engineering requirements. Data catalogs maintain all of these properties for governed data assets. The architectural implication is that enterprises with mature catalogs already have the governance infrastructure that long-term semantic memory requires — the question is whether they have connected their agents to it.
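As one illustrative record shape covering those five requirements (field names are assumptions for the example, not any product's schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Set

@dataclass
class GovernedFact:
    """Hypothetical governed-memory record: one field per requirement."""
    content: str
    origin: str                # provenance: asserting system or agent
    version: int               # versioning: which revision is current
    last_validated: datetime   # freshness signal
    owner: str                 # ownership record
    allowed_agents: Set[str] = field(default_factory=set)  # access control

    def is_fresh(self, max_age_days: int = 90) -> bool:
        return datetime.now() - self.last_validated <= timedelta(days=max_age_days)

    def retrievable_by(self, agent_id: str) -> bool:
        return agent_id in self.allowed_agents
```

None of these fields can be encoded natively in a bare embedding; they have to travel as governed metadata alongside the stored fact.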
8. How do AI memory frameworks handle memory staleness?
Current frameworks use decay functions, TTL, dynamic forgetting, or bi-temporal tracking. Zep/Graphiti’s bi-temporal model records valid_at and expired_at timestamps — the most governance-aware approach in the open-source ecosystem. These mechanisms address retrieval recency: they down-rank or expire memories based on time. They do not address source certification: knowing when a fact was inserted into memory does not answer whether the source was authorized to assert it, or whether the authoritative system has since updated the fact. For organizational semantic memory, the fix is connecting to a system that tracks freshness at the source.