TL;DR:
Three frontier models, three different strengths. ChatGPT leads on ecosystem depth and enterprise compliance. Grok leads on live data access. DeepSeek leads on cost for technical workloads. The right pick depends on what your team does most and what your data policies allow.
Introduction:
In the past year, the AI space has seen rapid competition with new models entering almost every quarter. In early 2026, OpenAI shipped GPT-5.4, consolidating frontier coding, reasoning, and computer use into a single model for the first time. xAI followed with Grok 4.20, introducing a four-agent parallel architecture that runs specialized agents for reasoning, fact-checking, coding, and creative tasks simultaneously on every query. DeepSeek continued its rapid release cycle with V3.2 and R1-0528, each iteration cutting hallucination rates and widening its cost advantage further. With Grok 5, GPT-5.5, and DeepSeek V4 all expected in Q2 2026, the pace of change is showing no signs of slowing.
AI adoption is now common across enterprises. McKinsey reports that 88% of organizations use AI in at least one business function, and Gartner notes that AI is built into most new enterprise software. Users choose models based on what works best for tasks like coding, research, support, or everyday productivity.
In this blog, we compare Grok vs ChatGPT vs DeepSeek, exploring their features, strengths, limitations, and ideal use cases to help you decide which AI tool works best for you.
Key Takeaways
- The blog compares Grok, ChatGPT, and DeepSeek to show how each differs in strengths like ecosystem, cost, and real-time capabilities
- Each model is best suited for specific use cases, making selection dependent on business needs rather than overall performance rankings
- Output quality, structure, and reliability are more critical than benchmark scores for practical enterprise usage
- Real-time access, deep reasoning, and research depth create clear functional differences across the three models
- Enterprise adoption depends on governance, compliance, and integration, where Kanerika enables production-ready agentic AI deployment
Overview of Grok, ChatGPT, and DeepSeek
1. ChatGPT — General-Purpose AI With the Largest Ecosystem
ChatGPT is OpenAI’s flagship product and has been the entry point into AI for most businesses since 2022. The current model, GPT-5.4, launched in March 2026 and is the first in the GPT-5 family to handle coding, reasoning, and computer use within a single architecture rather than across separate specialist models. It is also the first model to exceed the human expert baseline on computer use tasks, scoring 75% on OSWorld against a human benchmark of 72.4%.
- Context window: Up to 1.05M tokens, the largest in this comparison
- Reasoning effort: Five configurable levels let teams balance speed and depth per request without switching model tiers
- Computer Use API: Native desktop interaction — the model can see screens, click, type, and navigate applications
- Compliance: SOC 2 Type 2, GDPR, and CCPA alignment included at the Business tier
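For teams wiring this into automation, reasoning effort is a per-request setting rather than a tier switch. Here is a minimal sketch of what that might look like in a request payload (the model name, parameter name, and the five level names are illustrative assumptions on our part, not confirmed API surface):

```python
# Hypothetical request payload showing per-request reasoning effort.
# Endpoint, model name, and parameter names are illustrative, not a
# documented API; adapt to the actual SDK your team uses.

def build_request(prompt: str, effort: str) -> dict:
    """Build a chat-style request with one of five reasoning-effort levels."""
    levels = ("minimal", "low", "medium", "high", "maximum")  # assumed names
    if effort not in levels:
        raise ValueError(f"effort must be one of {levels}")
    return {
        "model": "gpt-5.4",             # current flagship per this comparison
        "reasoning_effort": effort,     # balances speed vs. depth per request
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize Q1 revenue drivers.", "high")
```

The practical point: a pipeline can run cheap, fast requests at a low effort level and reserve deep reasoning for the queries that need it, without maintaining two model deployments.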
What keeps ChatGPT dominant is less about any single capability and more about ecosystem depth. Its API is the most widely adopted in the industry, and the developer tooling, third-party integrations, and enterprise software built around it are unmatched by either competitor.
2. DeepSeek — Open-Source Reasoning at a Fraction of the Cost
DeepSeek is a Chinese AI lab that disrupted the market in January 2025 when it released R1, a reasoning model that rivaled OpenAI’s o1 performance at a training cost of approximately $5.5 million. The model family has evolved rapidly since. The current stable releases are DeepSeek V3.2 for general use and R1-0528 for reasoning-heavy tasks, both updated significantly from the original versions that drew early criticism for hallucination rates.
- Current models: V3.2 (December 2025) for general tasks; R1-0528 (May 2025) for structured reasoning
- Architecture: 671B total parameters, only 37B active per inference pass via Mixture-of-Experts
- Licensing: MIT license with full model weights publicly released, enabling complete self-hosting
- API entry point: $0.0008 per 1,000 input tokens for V3 — the lowest of the three by a significant margin
That efficiency is the structural reason why DeepSeek runs at a fraction of the cost of proprietary alternatives. Organizations can also self-host it on their own infrastructure, which changes the data residency equation for enterprises that would otherwise rule it out on security grounds.
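The Mixture-of-Experts design is what makes the 37B-of-671B figure possible: a router scores the available experts and activates only the top few for each token, so most of the network sits idle on any given pass. A toy sketch of that routing step (illustrative only, not DeepSeek's actual implementation):

```python
# Minimal Mixture-of-Experts routing sketch. The numbers and gating are
# toy values; the point is that a router selects top-k experts per token,
# so only a small slice of total parameters (37B of 671B for DeepSeek)
# participates in any single inference pass.

def route_top_k(scores: list[float], k: int) -> list[int]:
    """Return indices of the k highest-scoring experts for one token."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# 8 experts, router activates 2 per token: only 2/8 of expert capacity runs.
gate_scores = [0.1, 0.7, 0.05, 0.9, 0.2, 0.3, 0.6, 0.15]
active = route_top_k(gate_scores, k=2)  # -> indices of the two best experts
```

Because compute scales with active parameters rather than total parameters, this routing is the structural source of the cost advantage discussed throughout this comparison.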
3. Grok — Real-Time Intelligence From xAI
Grok is the flagship model from xAI, Elon Musk’s AI company, and is tightly integrated with the X platform. The current release is Grok 4.20, which introduced a genuinely different architecture: four specialized AI agents running in parallel on every complex query, debating outputs before producing a single answer. Grok handles general reasoning, Harper covers fact-checking and real-time X data, Benjamin focuses on logic and coding, and Lucas handles creative reasoning. This is built into the inference layer rather than a user-orchestrated framework.
- Four-agent architecture: Parallel agents debate and verify outputs before a single response is produced
- DeepSearch and DeeperSearch: Multi-source live web synthesis, with extended reasoning layered on top
- X indexing: Social content indexed in real time — sentiment, trending topics, and public discourse signals
- Current release: 500B parameter variant; full model still completing training; API live since April 2025
Grok’s clearest differentiation is live data access. No other model in this comparison is built around current information by default — and for functions where what happened yesterday changes the answer, that architectural difference is meaningful.
Output Quality Comparison
1. Performance Across Key Benchmarks
All three models are operating at frontier level. The gaps between them on any given benchmark are smaller than the gaps between use cases, so the scores matter less than which tasks those scores actually reflect.
GPT-5.4 leads on SWE-bench Verified at 80% and is the first model to exceed the human expert baseline on computer use tasks. Grok 4 sits at 75% on SWE-bench, and DeepSeek R1-0528 hit 87.5% on AIME 2025, up from 70% in the original release. Each model leads in a different domain, which is exactly why the choice depends on primary use case.
- GPT-5.4: 80% SWE-bench Verified, 75% OSWorld, strong across reasoning and computer use
- Grok 4/4.20: 75% SWE-bench, highest Arena Elo among current frontier models
- DeepSeek R1-0528: 87.5% AIME 2025, strongest on structured mathematical reasoning
Benchmark environments are controlled. Production usage involves ambiguous prompts, mixed data types, and tasks that do not map to any single evaluation framework. The numbers are a starting point, not a verdict.
2. Hallucination Rates and Factual Accuracy
Hallucination is where the models diverge more meaningfully for enterprise use. DeepSeek R1’s original release showed a 14.3% hallucination rate in Vectara’s evaluation, roughly four times the rate of its predecessor, DeepSeek V3. The May 2025 update, R1-0528, addressed this directly — DeepSeek documented a 45 to 50% reduction in hallucination on summarization and structured tasks. V3.2 continued that improvement.
GPT-5.4 maintains strong factual consistency across general knowledge domains, reflecting years of alignment investment from OpenAI. Grok’s outputs are technically strong but have carried documented accuracy concerns, particularly around politically sensitive topics where the model has consulted Musk’s views before responding. xAI has committed to corrections but the track record requires scrutiny for enterprises where output reliability matters.
For teams where downstream accuracy has legal, financial, or compliance consequences, GPT-5.4 carries the most mature and documented accuracy posture of the three.
3. Response Usability for Business Tasks
Benchmark performance and usability are not the same thing. GPT-5.4 produces outputs that are well-structured and require minimal reformatting for professional use. Its tone is calibrated, its structure is predictable, and its multimodal handling makes it effective across document types without separate tooling.
DeepSeek R1’s visible chain-of-thought is useful for verification but adds friction for output types where prose is expected. Grok’s outputs can drift toward informality or opinion, depending on the prompt and topic area, which introduces editing overhead for content-forward teams.
Output quality sets the floor. Where the models diverge more practically is in how they handle information — specifically, whether they can access current data at all.
Advance AI-Driven Business Transformation
Discover how AI is reshaping modern enterprises and driving measurable business impact.
Real-Time Data and Research Capabilities
1. Grok — Live Data Access as a Core Capability
Grok is the only model in this comparison built around real-time information by default. DeepSearch and DeeperSearch synthesize findings across multiple live sources simultaneously, with the Harper agent running in parallel specifically for fact-checking and X data verification. The X integration means Grok indexes social content as it happens — giving it live sentiment signals and early reads on emerging narratives that neither ChatGPT nor DeepSeek can surface from static training alone.
For analyst teams tracking markets, PR functions monitoring brand mentions, or researchers covering fast-moving topics, this architecture is a structural fit that the other two models approximate at best.
- DeepSearch: Multi-source live web synthesis on every query
- DeeperSearch: Extended reasoning layered on top of live search results
- X indexing: Real-time social sentiment, trending content, and public discourse signals
2. DeepSeek — Depth Over Currency
DeepSeek’s strength is the inverse of Grok’s. It does not retrieve live web content by default, which makes it a poor fit for time-sensitive research. Where it earns its place is in deep analysis over provided documents, technical papers, or large codebases. Its chain-of-thought output makes the reasoning path visible and auditable, which matters in research contexts where methodology is as important as the conclusion. Its training also covers Chinese-language sources more broadly than either competitor, extending its usefulness for globally oriented research teams.
- Where it fits: Large document corpora, technical research, and structured datasets where the methodology needs to be auditable
- Chain-of-thought: Every reasoning step is visible, which makes it easier to catch errors before acting on an output
- Multilingual coverage: Trained more broadly on Chinese-language sources than either competitor, useful for globally oriented research teams
3. ChatGPT — Broad Coverage, Limited Live Depth
ChatGPT supports web browsing on paid tiers, but it retrieves content per query rather than synthesizing across multiple live sources in parallel. For most general research tasks where current data is not the primary requirement, GPT-5.4’s training breadth gives it solid coverage. The gap with Grok becomes clear specifically on tasks where information from the last 24 to 48 hours changes the answer.
- Browsing depth: Paid tiers include web access, but it retrieves one source at a time — not the parallel synthesis Grok runs by default
- Where it fits: Research tasks where training breadth matters more than data published in the last 48 hours
Research capability is one dimension. For technical and engineering teams, coding performance is often the deciding factor.
Coding and Technical Performance
Benchmark Results and How They Apply to Engineering Work
GPT-5.4 leads on SWE-bench Verified at 80% and absorbs the capabilities of the previous GPT-5.3-Codex model, meaning frontier coding performance now sits in the same model as reasoning and computer use. For teams that previously juggled separate specialist models, that consolidation reduces infrastructure complexity. Grok 4’s Benjamin agent runs in parallel on every complex query, dedicated to logic and coding tasks, and is competitive at 75% on the same benchmark. DeepSeek R1-0528 earns its place through auditable chain-of-thought output and the lowest API cost in the group for automated code generation at volume.
For most engineering teams, ChatGPT remains the lowest-friction choice. The ecosystem advantage compounds over time.
| Metric | GPT-5.4 | Grok 4.20 | DeepSeek R1-0528 |
|---|---|---|---|
| SWE-bench Verified | 80% | 75% | Competitive |
| Language coverage | Broadest | Strong | Strong |
| API cost per 1M tokens | $2.50 input / $15 output | Higher | $0.80 input |
| Developer ecosystem | Largest | Growing | Smaller |
| Best for | General engineering, broad dev tasks | Complex reasoning-heavy code | High-volume technical pipelines |
Pricing and Cost Comparison
All three platforms have free tiers, but none are practical for sustained business use. ChatGPT limits free users to roughly 10 GPT-5.4 messages before fallback. DeepSeek’s web platform gives free access to V3.2 and R1-0528 with a cap of 50 DeepThink messages per day. Grok is available free on X but gates Think Mode and DeepSearch behind paid plans.
The real differences emerge at the paid and API level. ChatGPT Plus costs $20 per month with GPT-5.4 Thinking and an 80-message-per-3-hour cap. The Pro plan at $200 per month removes limits and unlocks the highest reasoning tier. The Business plan at $25 per user per month adds admin controls, SAML SSO, and data isolation by default. Grok’s SuperGrok at $30 per month unlocks all reasoning modes and DeepSearch. DeepSeek’s API starts at $0.0008 per 1,000 input tokens for V3 — among the lowest in the market by a significant margin.
The cost gap widens sharply at scale. GPT-5.4 Standard API is priced at $2.50 per million input tokens, with the Pro tier jumping to $30 per million — a 12x increase for dedicated compute allocation. DeepSeek V3 at $0.80 per million input tokens costs less than a third of ChatGPT Standard’s rate, and is orders of magnitude cheaper than GPT-5.4 Pro. For teams running automated pipelines at high volume, the annual cost difference between GPT-5.4 Pro and DeepSeek can be substantial.
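To make that gap concrete, here is a back-of-envelope input-cost calculation using the per-million rates quoted in this section. The monthly token volume is a hypothetical pipeline figure chosen for illustration, not a benchmark:

```python
# Back-of-envelope monthly input-token cost at scale, using the per-million
# rates quoted in this comparison. Volume is a hypothetical example.

RATES_PER_M = {               # USD per 1M input tokens
    "deepseek_v3": 0.80,
    "gpt_5_4_standard": 2.50,
    "gpt_5_4_pro": 30.00,
}

def monthly_cost(tokens: int, rate_per_m: float) -> float:
    """Cost in USD for a given number of input tokens at a per-million rate."""
    return tokens / 1_000_000 * rate_per_m

volume = 500_000_000          # 500M input tokens/month, illustrative
costs = {name: monthly_cost(volume, r) for name, r in RATES_PER_M.items()}
# deepseek_v3 -> $400, gpt_5_4_standard -> $1,250, gpt_5_4_pro -> $15,000
```

At this volume the Pro-tier bill is over 37 times the DeepSeek bill before output tokens are even counted, which is why the "substantial annual difference" claim holds for automated pipelines.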
| Tier | ChatGPT | DeepSeek | Grok |
|---|---|---|---|
| Free | Limited GPT-5.4, ~10 msg then fallback | V3.2 and R1 web, 50 DeepThink/day | Basic on X |
| Entry paid | $20/month (Plus) | API from $0.0008/1K tokens | $30/month (SuperGrok) |
| Pro | $200/month | Custom | Via xAI API |
| Business/Team | $25/user/month | Custom | Via Azure (Grok 3+) |
| API standard input | $2.50/million tokens | $0.80/million tokens | Higher, varies |
Limitations of Grok, ChatGPT, and DeepSeek
1. Grok — Platform Lock-In and a Patchy Moderation Record
Access and Ecosystem Constraints
Grok’s most capable features sit behind subscriptions tied to X, creating procurement friction that neither ChatGPT nor DeepSeek introduces. The xAI API, available since April 2025, opens a path for developers, but the integration ecosystem is still a fraction of ChatGPT’s in terms of third-party tooling, IDE plugins, and community documentation. The current Grok 4.20 is also a partial release — the 500B parameter variant — with the full model still completing training and broader API access pending.
- Access model: Think Mode, DeepSearch, and DeeperSearch all require SuperGrok or X Premium+ — there is no way to access Grok’s core differentiators without paying into the X ecosystem
- Ecosystem gap: Significantly fewer third-party integrations and IDE plugins than ChatGPT; community documentation is still thin compared to years of ChatGPT tooling
Content Moderation Track Record
Grok has produced antisemitic outputs, praised Hitler in responses, and was caught consulting Elon Musk’s political views before answering sensitive questions. xAI has addressed some of these incidents, but the pattern is documented. For enterprises where output reliability and brand safety are non-negotiable, the track record needs to be part of any evaluation.
- Documented incidents: Antisemitic outputs and politically skewed responses in 2025
- Mitigation: xAI has committed to corrections; ongoing monitoring is still warranted
2. ChatGPT — Expensive at the Top and Shallow on Live Research
Cost at Scale
ChatGPT is the most expensive of the three at volume. GPT-5.4 Pro API is priced at $30 per million input tokens and $180 per million output tokens. Teams running high-volume automated workflows on Pro-level reasoning will hit cost pressure that DeepSeek, in particular, does not create. The Plus plan also caps at 80 messages per 3 hours, which creates friction for continuous use cases that need consistent throughput.
- API cost: GPT-5.4 Pro at $30/$180 per million tokens, highest in this comparison
- Usage limits: 80 messages per 3 hours on Plus before throttling
Research Depth Limitations
ChatGPT’s browsing tool retrieves content per query. It does not perform the parallel multi-source synthesis that Grok’s DeepSearch does, which means research quality drops on tasks where synthesizing current information from several sources simultaneously matters. The 272K token context surcharge also adds unpredictable cost for teams processing large document sets at scale.
- Browsing: Per-query retrieval only; not built for parallel live synthesis
- Context surcharge: Input costs double above 272K tokens
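To see how the surcharge compounds on large documents, here is a rough cost model. It assumes the doubled rate applies only to tokens beyond the 272K threshold; the exact billing rule is an assumption on our part, since the comparison only states that input costs double above 272K:

```python
# Rough model of the context surcharge, ASSUMING the doubled rate applies
# to tokens beyond the 272K threshold (the precise billing rule is not
# documented here). Rates are GPT-5.4 Standard figures from this comparison.

BASE_RATE = 2.50 / 1_000_000   # USD per input token
THRESHOLD = 272_000            # tokens billed at the base rate

def input_cost(tokens: int) -> float:
    """Estimated input cost in USD for a single large-context request."""
    base = min(tokens, THRESHOLD) * BASE_RATE
    surcharge = max(tokens - THRESHOLD, 0) * BASE_RATE * 2  # doubled rate
    return base + surcharge

# A 500K-token document: 272K tokens at the base rate, the remaining
# 228K at double — noticeably more than linear pricing would suggest.
```

For teams batching large document sets, this kind of step function is what makes per-request cost hard to predict without modeling it up front.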
3. DeepSeek — Data Sovereignty Risk That Does Not Go Away Quietly
Data Residency Risk
DeepSeek’s standard API routes data through servers in China. For organizations in financial services, healthcare, or government, this is disqualifying without architectural controls. Bradley Shimmin, Analyst at Omdia, publicly advised against logging into DeepSeek directly. Australia banned it for government use in 2025, and similar restrictions are in place across UK and EU enterprise contexts.
- Standard API risk: Data sent through DeepSeek’s API is processed on servers in China, with no documented data handling agreements that meet Western enterprise compliance standards
- Government restrictions: Banned for government use in Australia (2025); enterprise restrictions in place across UK and EU contexts
Self-Hosting as the Enterprise Path
The MIT license gives organizations a viable alternative. Self-hosting DeepSeek V3.2 or R1-0528 on private infrastructure eliminates the data residency issue entirely. The requirement is steep — a minimum of 8 NVIDIA H200 GPUs with 141GB of memory each — but for technically resourced enterprises, it removes the primary objection and preserves the cost advantage.
- Self-hosting: Running DeepSeek on private infrastructure removes the data residency problem entirely — the MIT license makes this legal and the architecture supports it
- Infrastructure floor: A minimum of 8 NVIDIA H200 GPUs with 141GB of memory each is required for the full model; this is a meaningful capital commitment before any deployment work begins
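Rough memory arithmetic shows why eight GPUs is the floor. Assuming FP8 weights at one byte per parameter (an assumption; actual serving formats and overheads vary), the weights alone approach the capacity of several H200s before KV cache and activations are counted:

```python
# Why the floor is 8x H200: rough memory arithmetic for the full
# 671B-parameter model. Bytes-per-parameter assumes FP8 weights
# (1 byte/param), which is an assumption; real deployments also need
# headroom for KV cache, activations, and serving overhead.

params_b = 671                  # billions of parameters (full model)
bytes_per_param = 1             # FP8 assumption
weights_gb = params_b * bytes_per_param    # ~671 GB for weights alone

gpus, mem_per_gpu_gb = 8, 141              # H200 figures from this article
cluster_gb = gpus * mem_per_gpu_gb         # 1128 GB aggregate memory

headroom_gb = cluster_gb - weights_gb      # ~457 GB left for cache/overhead
```

The takeaway: the 8-GPU minimum is not padding. At one byte per parameter, the weights consume well over half the cluster's memory, and longer contexts eat into the remainder quickly.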
Understanding the limitations of each model makes the use case decision cleaner.
Which AI Tool Fits Your Use Case?
1. Content and Marketing Teams
ChatGPT remains the strongest fit for content-heavy functions. GPT-5.4’s outputs are polished, well-structured, and require minimal reformatting before publication. Its breadth across content types and multimodal support for image and audio inputs makes it a practical all-in-one tool for most content teams. For teams that also need real-time market context, Grok’s X integration and Harper agent add a research layer that ChatGPT cannot match natively.
- ChatGPT: Strongest for writing quality, editorial consistency, and output polish across formats
- Grok as a layer: Worth adding specifically for social listening, trending topic research, and real-time narrative tracking — not as a replacement for ChatGPT but alongside it
2. Developer and Engineering Teams
Most engineering teams will default to ChatGPT for good reason — ecosystem, documentation, IDE integrations, and GitHub compatibility are all mature and well-supported. But the choice shifts at the edges. Grok 4.20’s Benjamin agent and Big Brain Mode make it worth evaluating for complex domain-specific engineering where reasoning depth matters more than tooling convenience. DeepSeek belongs in any conversation about high-volume automated code generation pipelines where API cost is the binding constraint.
| Priority | Best fit |
|---|---|
| Broad engineering and general development | ChatGPT (GPT-5.4) |
| Complex STEM and algorithm-heavy tasks | Grok 4.20 |
| High-volume automated code generation | DeepSeek V3.2 / R1-0528 |
3. Research and Analytical Functions
Grok is the strongest choice for research requiring live information. Its four-agent architecture, with Harper dedicated to real-time fact-checking and X data, gives it a structural advantage on tasks where currency matters. DeepSeek R1-0528 is better suited to structured analysis over fixed datasets — its visible reasoning chain makes it effective for mathematical and document-heavy research where the methodology needs to be auditable. ChatGPT covers both adequately but leads in neither.
- For live research and intelligence: Grok 4.20 (DeepSearch, X integration, Harper agent)
- For structured analysis over fixed data: DeepSeek R1-0528 (chain-of-thought, strong STEM reasoning)
- For broad general research: ChatGPT (GPT-5.4, large training base, multimodal input)
4. Enterprises With Security and Compliance Requirements
This is the clearest decision in the comparison. ChatGPT’s Business and Enterprise tiers come with SOC 2 Type 2, GDPR and CCPA alignment, SAML SSO, MFA, and data isolation by default — the most documented and procurement-ready compliance posture of the three. DeepSeek’s standard API is off the table for most regulated deployments, with self-hosting the only viable path. Grok introduces procurement complexity through its X dependency and newer enterprise track record.
For financial services, healthcare, and enterprise software, ChatGPT Business at $25 per user per month is the starting point. Everything else requires additional architectural work before it clears procurement.
| Use Case | Best Fit | Alternative |
|---|---|---|
| Long-form content and marketing | ChatGPT (GPT-5.4) | Grok (research layer) |
| Real-time market and social intelligence | Grok 4.20 | ChatGPT (browsing) |
| High-volume automated code generation | DeepSeek V3.2 / R1-0528 | ChatGPT |
| Complex engineering and algorithm tasks | GPT-5.4 / Grok 4.20 | DeepSeek |
| Regulated industry deployment | ChatGPT Business/Enterprise | DeepSeek (self-hosted only) |
| Cost-sensitive STEM workloads | DeepSeek | Grok Mini |
| General business automation | ChatGPT | DeepSeek |
| Structured academic or technical research | DeepSeek R1-0528 | Grok 4.20 |
How Kanerika Builds Production-Ready AI Agents for Enterprises
Kanerika designs and deploys production-ready AI agents tailored for enterprise use across industries such as financial services, healthcare, manufacturing, and logistics. Its solutions—KARL for data insights, DokGPT for document intelligence, Susan for PII redaction, and Alan for legal document summarization—are built for specific business functions rather than repurposed from generic AI tools. Each agent seamlessly integrates with existing systems like data pipelines, CRMs, ERPs, and cloud platforms, and is trained on structured enterprise data from the outset.
A key differentiator in Kanerika’s approach is its governance-first architecture. Every deployment includes role-based access controls, audit trails, and compliance documentation aligned with industry regulations. The company holds ISO 9001, ISO 27001, and ISO 27701 certifications, with HIPAA and SOC 2 compliance embedded into projects for regulated sectors from the start.
As a Microsoft Solutions Partner for Data & AI and a Microsoft Fabric Featured Partner, Kanerika leverages Azure, Microsoft Fabric, and the broader Microsoft ecosystem to build scalable solutions. For enterprises exploring agentic AI, Kanerika provides a streamlined path from proof-of-concept to full production—without the delays caused by retrofitting governance later.
Case Study: AI-Powered Clienteling for Personalized In-Store and Online Experiences
Challenges:
The retailer lacked a unified view of customer data across stores and digital channels. Store associates relied on intuition instead of insights, which led to generic recommendations and missed upsell opportunities. Online and in-store experiences felt disconnected, making it hard to build loyalty with high-value customers. Manual processes also limited how quickly associates could act on customer preferences.
Solutions:
Kanerika implemented an AI-powered clienteling solution that unified customer profiles across online and offline touchpoints. The system analyzed purchase history, browsing behavior, preferences, and engagement signals to generate personalized recommendations. Store associates received real-time insights through an intelligent assistant, enabling tailored conversations and product suggestions. The same intelligence was extended to digital channels to ensure consistent personalization everywhere.
Results:
32% increase in average order value
27% improvement in repeat purchase rates
40% higher engagement from high-value customers
Conclusion
Grok, ChatGPT, and DeepSeek are built for different things. ChatGPT wins on ecosystem depth and compliance, Grok wins on live data access, and DeepSeek wins on cost for technical workloads. That said, no single model covers every use case equally well. The right choice comes down to what your team does most and what your data policies allow.
For enterprises, those two factors narrow the field faster than any benchmark comparison. And as Grok 5, GPT-5.5, and DeepSeek V4 approach release in Q2 2026, the gap between these platforms is likely to shift again — making it worth revisiting this decision sooner than most teams expect.
Transform Your Business with AI-Powered Solutions!
Partner with Kanerika for Expert AI Implementation Services
FAQs
Which is better, Grok, ChatGPT, or DeepSeek?
There is no single winner. ChatGPT is the most versatile and has the strongest enterprise compliance posture. Grok leads on real-time data access and live research. DeepSeek is the most cost-efficient for technical and reasoning-heavy workloads. The right choice depends on your primary use case, budget, and data policies.
Is DeepSeek safe to use for business?
DeepSeek’s standard API routes data through servers in China, which raises data residency concerns for regulated industries. Organizations in financial services, healthcare, and government should avoid the standard API without additional controls. Self-hosting under the MIT license is a viable alternative but requires significant infrastructure investment.
Is Grok better than ChatGPT for research?
For research that depends on current information, yes. Grok’s DeepSearch and DeeperSearch synthesize live web sources in parallel, and its X integration surfaces real-time social signals. For research that does not require data from the past 24 to 48 hours, ChatGPT’s training breadth is sufficient and often easier to work with.
Which AI model is cheapest to use at scale?
DeepSeek is the most cost-efficient by a significant margin. Its V3 API starts at $0.0008 per 1,000 input tokens, compared to ChatGPT Standard at $2.50 per million tokens and GPT-5.4 Pro at $30 per million. For teams running high-volume automated pipelines, the annual cost difference can be substantial.
Can I use these AI models for free?
All three offer free tiers, but none are practical for sustained business use. ChatGPT limits free users to around 10 GPT-5.4 messages before fallback. DeepSeek allows free web access with a cap of 50 DeepThink messages per day. Grok is available free on X, but Think Mode and DeepSearch require a paid plan.
Is Grok available for enterprise API use?
Yes. xAI launched its API in April 2025, giving developers programmatic access to Grok models. Enterprise deployment is also available through Azure for Grok 3 and above. That said, the integration ecosystem is still significantly smaller than ChatGPT’s, and advanced features like DeepSearch require a SuperGrok subscription on top of API access.
Can DeepSeek be used in regulated industries?
Not via its standard API. Data processed through DeepSeek’s default API routes through servers in China, which conflicts with data residency requirements in financial services, healthcare, and government sectors. The viable path for regulated industries is self-hosting under the MIT license on private infrastructure — this eliminates the residency issue entirely but requires a minimum of 8 NVIDIA H200 GPUs, making it a meaningful capital commitment before any deployment begins.
How do these models handle multilingual or non-English workloads?
ChatGPT has the broadest multilingual coverage across general business languages and performs consistently across European and Asian language tasks. DeepSeek has deeper coverage of Chinese-language sources than either competitor, making it a stronger fit for teams working across Chinese-language datasets or regional markets. Grok’s multilingual performance is competitive but less documented outside English.