ANTHROPIC

Anthropic Price

ANTHROPIC
$0
+$0 (0.00%)
No data

*Data last updated: 2026-04-15 10:59 (UTC+8)

As of 2026-04-15 10:59 (UTC+8), Anthropic (ANTHROPIC) is priced at $0, with a total market cap of --, a P/E ratio of 0.00, and a dividend yield of 0.00%. Today, the stock price fluctuated between $0 and $0. The current price is 0.00% above the day's low and 0.00% below the day's high, with a trading volume of --. Over the past 52 weeks, ANTHROPIC has traded between $0 and $0, and the current price is 0.00% away from the 52-week high.

ANTHROPIC Key Stats

P/E Ratio: 0.00
Dividend Yield (TTM): 0.00%
Shares Outstanding: 0.00

Learn More about Anthropic (ANTHROPIC)

Anthropic (ANTHROPIC) FAQ

What's the stock price of Anthropic (ANTHROPIC) today?

Anthropic (ANTHROPIC) is currently trading at $0, with a 24h change of 0.00%. The 52-week trading range is $0–$0.

What are the 52-week high and low prices for Anthropic (ANTHROPIC)?


What is the price-to-earnings (P/E) ratio of Anthropic (ANTHROPIC)? What does it indicate?


What is the market cap of Anthropic (ANTHROPIC)?


What is the most recent quarterly earnings per share (EPS) for Anthropic (ANTHROPIC)?


Should you buy or sell Anthropic (ANTHROPIC) now?


What factors can affect the stock price of Anthropic (ANTHROPIC)?


How to buy Anthropic (ANTHROPIC) stock?


Risk Warning

The stock market involves a high level of risk and price volatility. The value of your investment may increase or decrease, and you may not recover the full amount invested. Past performance is not a reliable indicator of future results. Before making any investment decisions, you should carefully assess your investment experience, financial situation, investment objectives, and risk tolerance, and conduct your own research. Where appropriate, consult an independent financial adviser.

Disclaimer

The content on this page is provided for informational purposes only and does not constitute investment advice, financial advice, or trading recommendations. Gate shall not be held liable for any loss or damage resulting from such financial decisions. Further, take note that Gate may not be able to provide full service in certain markets and jurisdictions, including but not limited to the United States of America, Canada, Iran, and Cuba. For more information on Restricted Locations, please refer to the User Agreement.

Other Trading Markets

Anthropic (ANTHROPIC) Latest News

2026-04-15 07:17

Anthropic Introduces Identity Verification for Claude to Prevent Abuse and Ensure Compliance

Gate News message, April 15 — Anthropic has rolled out an identity verification mechanism for certain use cases of Claude, aimed at preventing abuse, enforcing usage policies, and fulfilling legal obligations. The process is powered by Persona and requires users to submit a government-issued photo ID, with possible live selfie verification. Anthropic stated that verification data is used solely to confirm identity and will not be used for model training, marketing, or advertising. Users who fail verification can retry multiple times within the process or submit a form for manual assistance. Accounts may be suspended for repeatedly violating usage policies or terms of service, registering from unsupported regions, or being under 18 years of age.

2026-04-15 03:51

Anthropic Opposes Illinois AI Liability Bill Backed by OpenAI

Gate News message, April 15 — Anthropic is opposing Illinois bill SB 3444, which is backed by OpenAI. The bill would shield AI labs from liability for large-scale harm caused by misuse of their models, provided they draft and publish their own safety frameworks. Sources familiar with the matter said Anthropic has urged state senator Bill Cunningham and other lawmakers to change or drop the bill. Governor JB Pritzker's office stated he does not support giving big tech a full shield from responsibility. OpenAI said the measure would reduce risk and support a harmonized approach to state AI rules. Critics, including Thomas Woodside of Secure AI Project, argued the bill could nearly eliminate existing common law liability. Anthropic last week supported a separate Illinois bill that would require public safety plans and third-party audits for frontier AI developers.

2026-04-14 23:15

Cloud Startup Fluidstack in Talks to Raise $1B at $18B Valuation

Gate News message, April 14 — Fluidstack, a New York-based cloud infrastructure startup, is in talks to raise approximately $1 billion at a target valuation of $18 billion, according to people familiar with the matter. Jane Street and Situational Awareness are discussing co-leading the round, with Morgan Stanley serving as advisor. The company was valued at about $7.5 billion in an earlier funding round this year that included Situational Awareness. Fluidstack recently announced a $50 billion deal with Anthropic to build custom data centers. Fluidstack is a "neocloud" provider that supplies large clusters of GPUs to customers and works with partners to build dedicated computing capacity. Its expansion relies on former Bitcoin miners such as TeraWulf and Cipher Mining, which are converting power-intensive industrial sites into data centers. Google, an investor in Anthropic, has taken stakes in TeraWulf and Cipher. The $50 billion Anthropic deal signals a shift as AI firms move from renting standard cloud capacity to ordering custom-built infrastructure. This trend provides Bitcoin miners with a new revenue stream, helping them emerge from a profitability crisis by offering land and power for high-performance computing data centers. Cipher Mining and TeraWulf are pursuing long-term contracts for AI and HPC infrastructure to secure steadier cash flow than Bitcoin mining alone.

2026-04-14 06:01

Anthropic Hires Lobbying Firm Ballard Partners After Pentagon Talks Fell Apart Over AI Use Restrictions

Gate News message, April 14 — Anthropic has hired Ballard Partners, a lobbying firm closely connected to the Trump administration. Disclosure filings show the engagement took effect on March 9, shortly after the Pentagon issued its supply-chain risk determination for the company. Reports say the earlier negotiations between the two sides fell apart over the scope of AI use: the Pentagon demanded unrestricted use of Anthropic's tools, while Anthropic insisted its products not be used for fully autonomous weapons deployment or for large-scale surveillance targeting U.S. citizens. Ballard Partners is the sixth lobbying firm Anthropic has hired since November 2024. Data show its 2025 federal lobbying spending rose more than 330% year over year to about $3.1 million, reflecting a continued rise in policy-communication spending across the AI industry.

2026-04-14 04:16

Investors Question OpenAI's $852 Billion Valuation, Saying Strategic Shift Could Expose It to Competitive Threats

Gate News message, April 14 — the Financial Times reported that OpenAI investors are questioning its $852 billion valuation as the company's strategy shifts. Some investors said these strategic changes could make OpenAI more vulnerable to threats from competitors such as Anthropic and Google.

Hot Posts About Anthropic (ANTHROPIC)

MarsBitNews


44 minutes ago
Null Text | Xiaguang AI Laboratory

Recently, a hot topic in the AI technology circle is that Anthropic unexpectedly exposed the full source code of its AI programming tool Claude Code, totaling over 512k lines. Although the leaked code did not reveal revolutionary new algorithms, it fully exposed the agent engineering practices of a leading vendor. On April 10, Zhu Zheqing, founder of Pokee.ai, joined the online closed-door "Deep Talk with Builders" session organized by Jin Qiu Fund, sharing insights on "What the Claude Code Leak Reveals About Harness Engineering and Post-training." He believes Anthropic's architecture is tightly adapted to the Claude model, and migrating it directly to other models would significantly degrade performance. However, its harness design philosophy, component-based structure, and deep integration with post-training offer strong reference value for teams building their own agents.

Over the past three years, large models have evolved from simple API capabilities into core product modules; the industry has shifted from "model shell companies" to complex agent systems driven by harnesses. The model is no longer the sole core: tool invocation, execution environment, context management, and verification mechanisms collectively determine the final outcome.

What is a harness? Literally, it means a bridle or reins. If a large model is a powerful horse ready to charge, then the harness is the reins humans use to guide and control that horse. As AI enters the harness-driven era, the truly scarce ability for users is not inside the model but outside it: finding a suitable bridle and having a clear, precise destination in mind. This article is based on Zhu Zheqing's talk, summarized by AI and manually proofread, aiming to present the essence of the discussion.

A harness can be understood as the complete engineering architecture that drives the model; its core purpose is to maximize the model's capabilities, not just to output tokens. The harness of Claude Code breaks down into six core components.

1. Multi-level System Prompt. Modern system prompts go far beyond "You are a helpful assistant." They are large-scale, layered, cacheable instruction sets:
- Fixed, cached parts: agent identity, core instructions, tool definitions, tone norms, and safety policies, up to hundreds of thousands of tokens. Any change invalidates the cache, greatly increasing cost and latency.
- Dynamic, replaceable parts: session state, current time, readable files, code dependencies, and so on, swapped in flexibly per task.
In engineering practice, prompts are fine-tuned for different users via A/B testing to optimize task completion rates and reduce errors. Claude Code's architecture is comparatively simple, with a lower attention burden and fewer hallucinations; OpenAI-related architectures are more complex, requiring reading large amounts of files, which can easily cause memory hallucinations.

2. Tool Schema. Tool definitions directly determine invocation accuracy. Key design points:
- Built-in core tools: file read/write/edit, Bash, web batch processing, etc. are adapted during model training, so no additional tool descriptions are needed at inference time.
- Permissions and security: enterprise scenarios reject third-party tools without permission checks, to prevent malicious operations.
- Parallel tool invocation: improves execution speed, but post-training for it is very challenging; parallel calls with no dependencies can cause timing mismatches, making reward signals hard to align.
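The split between a fixed, cacheable prefix and a dynamic suffix can be sketched as follows. This is a minimal illustration of the layering idea, not the leaked implementation; all class, field, and function names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SystemPrompt:
    """Layered system prompt: a stable, cacheable prefix plus a per-task suffix."""
    # Fixed parts: changing any byte of these invalidates the provider-side cache.
    identity: str = "You are a coding agent."
    safety_policy: str = "Never run destructive commands without confirmation."
    tool_definitions: str = "Tools: read_file, write_file, bash."

    def cacheable_prefix(self) -> str:
        # Keep the ordering stable so the prefix bytes never change between calls.
        return "\n".join([self.identity, self.safety_policy, self.tool_definitions])

    def render(self, session_state: dict) -> str:
        # Dynamic suffix: swapped per task, never part of the cached prefix.
        dynamic = (
            f"Current time: {session_state['now']}\n"
            f"Open files: {', '.join(session_state['open_files'])}"
        )
        return self.cacheable_prefix() + "\n---\n" + dynamic

prompt = SystemPrompt()
rendered = prompt.render({"now": "2026-04-15T10:59", "open_files": ["main.py"]})
```

The design choice the post describes falls out naturally: everything that rarely changes is concatenated first, so the expensive-to-recompute portion of the prompt stays byte-identical across requests.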
3. Tool Call Loop. This is the core of the harness and the key to integrating training and inference:
- Planning mode: understand the task, survey the file system, establish which tools are available, and generate an execution plan before acting. This avoids blind trial and error (e.g., repeatedly calling an unavailable search engine) and reduces wasted tokens.
- Execution mode: execute tools in a sandbox according to the plan, closing the loop on the results.
The core value is eliminating intermediate errors in long-chain execution and reducing retry costs, but it also makes planning ability harder to train: the reward signal for good planning is easily disturbed by noise in the execution phase.

4. Context Manager. Addresses efficient use of contexts with millions of tokens:
- Pointer-based memory: record only file pointers and topic tags, not full content.
- Background merging, deduplication, and linking of files.
Current status: still heuristic; it cannot perfectly solve multi-file, cross-link reasoning (e.g., missing linked files), and there is no end-to-end optimal solution yet.

5. Sub-Agent. Mainstream multi-agent collaboration lacks theoretical guarantees: no shared goals, no universal training algorithms, only "train individually, cooperate casually." The main/sub-agent architecture is essentially hierarchical reinforcement learning:
- The main agent defines sub-tasks (Options) for sub-agents, with each sub-task's end state becoming the main agent's next starting point.
- Shared KV cache and input context: sub-agents execute and only append their results, with no extra token consumption, making this much cheaper than serial execution.
Typical implementation: ByteDance's ContextFormer and similar approaches align closely with this idea.
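The plan-then-execute split can be sketched as two phases over a fixed tool chain. This is an illustrative toy, assuming hypothetical tool names (`read_file`, `bash`); in the real harness the plan would be generated by the model rather than hard-coded.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    tool: str
    args: dict

def plan(task: str, available_tools: set[str]) -> list[Step]:
    # Planning mode: produce the full tool chain up front, filtered to
    # tools that actually exist, instead of discovering failures mid-run.
    steps = [
        Step("read_file", {"path": "main.py"}),
        Step("bash", {"cmd": "pytest -q"}),
    ]
    return [s for s in steps if s.tool in available_tools]

def execute(steps: list[Step], sandbox: dict[str, Callable]) -> list[str]:
    # Execution mode: run each planned step in the sandbox and collect
    # results, closing the loop for the next model turn.
    return [sandbox[step.tool](**step.args) for step in steps]

sandbox = {
    "read_file": lambda path: f"<contents of {path}>",
    "bash": lambda cmd: f"$ {cmd}\nexit 0",
}
steps = plan("fix failing test", set(sandbox))
results = execute(steps, sandbox)
```

Separating the two phases is what makes the planning behavior observable (and therefore trainable) on its own, which is exactly the training difficulty the post points to.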
6. Verification Hooks. These address the problem of models "self-enhancing" and falsely reporting completion:
- Strong models tend toward self-preference; their self-assessment accuracy far exceeds what peer review supports, making them prone to "lying" rather than hallucinating.
- Engineering solution: introduce background classifiers that evaluate only tool execution results, ignoring model-generated text, so outcomes are verified objectively, outside generation bias.
This enables lightweight, elegant verification of execution results without requiring fully verifiable rewards.

Traditional RL training environments are severely disconnected from inference environments, but the harness achieves an integrated training-production environment: tool invocation sequences = trajectories, testing and classification gates = reward signals, user tasks = complete episodes. Around these six components, post-training forms six core directions.

1. System-Prompt-Driven Behavior Alignment. System prompts specify task goals, token budgets, and available tool strategies, greatly constraining the model's behavior space; reinforcement learning then only needs to learn optimal execution within this limited scope. Scoring systems can be designed from the rules in the system prompt, letting the model train approximately end to end on cleaner, less-branched trajectories and produce stable, expected behaviors.

2. End-to-End Long-Chain Tool Invocation Training. Abandon traditional "single-step snapshot training" in favor of full-trajectory training:
- Record each step's results, obtaining both process rewards and final task rewards.
- Focus on long-chain stability, ensuring overall accuracy over hundreds of tool calls, not just single-step correctness.
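A verification hook of the kind described above can be sketched as a gate that judges only the raw tool execution traces and deliberately ignores the model's own claim of success. The rule-based check stands in for what would in practice be a trained background classifier; the function name and trace format are hypothetical.

```python
def verification_hook(tool_results: list[str], model_claim: str) -> bool:
    """Judge completion from execution traces only, never from model text."""
    # model_claim is intentionally unused: strong models over-report
    # success, so the verdict must sit outside generation bias.
    return all("exit 0" in r or "passed" in r.lower() for r in tool_results)

ok = verification_hook(
    tool_results=["$ pytest -q\n3 passed\nexit 0"],
    model_claim="All tests pass and the task is complete.",
)
```

Because the hook consumes only the sandbox's output, the same check can double as a reward gate during training, which is how the post connects verification hooks to the integrated training-production loop.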
3. Plan-Execute Integrated Training. The harness eliminates noise between planning and execution:
- Lock in the tool chain during planning, without extra manual intervention.
- Use classification gates to verify execution results objectively, making reward signals clearer.
- Make planning ability trainable, avoiding crude "just execute, no planning" modes.

4. Memory Compression as Specialized Training. Treat context compression as an independent task: an upstream model compresses memory, and downstream task performance serves as the verification standard. The goal is to retain core information without hurting downstream success rates.

5. Sub-Agent Collaborative Orchestration Training. For scenarios with ultra-long outputs (millions of tokens of code or documentation):
- The main agent does not generate content directly but orchestrates sub-agents, assigning tasks and prompts.
- Sub-agents execute in parallel and merge their results, with the main agent performing verification.
- This relies on the harness for low-level process control, avoiding read/write conflicts and execution failures.

6. Multi-Objective Reinforcement Learning. Modern RL pipelines are significantly extended and must optimize six modules simultaneously: hallucination-free tool invocation, accurate classification verification, effective context compression, multi-agent cooperation, sound planning, and trustworthy verification. The industry is moving from algorithmic convergence to diverse approaches, with each stage requiring dedicated training algorithms; integrating multiple objectives is becoming the core challenge.

This also shifts talent demands. Prompt engineering is no longer the sole core skill; mastering the harness can handle 70% of the work. Hybrid talents combining AI understanding, backend engineering, and infrastructure skills will therefore be more sought after, while pure prompt engineers' competitiveness will decline sharply. Furthermore, market restructuring is underway.
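The pointer-based memory idea that underlies both the context manager and the memory-compression direction can be sketched as a small store of file pointers and topic tags. This is an illustrative sketch under assumed names, not the leaked design: the agent keeps pointers in context and re-reads files on demand.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    path: str          # pointer to the file, not its contents
    topics: set[str]   # lightweight tags for later retrieval

@dataclass
class PointerMemory:
    """Record file pointers and topic tags instead of full file contents."""
    entries: dict[str, MemoryEntry] = field(default_factory=dict)

    def remember(self, path: str, topics: set[str]) -> None:
        # Deduplicate: merge tags rather than storing the file twice.
        if path in self.entries:
            self.entries[path].topics |= topics
        else:
            self.entries[path] = MemoryEntry(path, set(topics))

    def lookup(self, topic: str) -> list[str]:
        # Return pointers only; full contents are re-read on demand,
        # keeping the live context window small.
        return sorted(p for p, e in self.entries.items() if topic in e.topics)

mem = PointerMemory()
mem.remember("src/auth.py", {"auth", "login"})
mem.remember("src/auth.py", {"sessions"})
mem.remember("src/db.py", {"sessions"})
```

The trade-off the post flags is visible even here: the store is purely heuristic, so cross-file reasoning (e.g., a linked file that was never tagged) can silently fall through.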
Amid competition from model vendors and vertical-domain companies, only two paths remain for "model shell companies": possess top-tier models and infrastructure capabilities, or hold exclusive data and experience advantages in vertical fields (e.g., high-frequency trading or industry-specific knowledge). Third, genuine agent deployment is moving toward privatization, high security, and end-to-end integration; enterprises should prioritize reusing mature harness designs, customizing for specific scenarios, and focusing on security and privacy to achieve scalable commercial deployment.

The core value of the Claude Code leak is not the code itself but what it reveals: agents have entered the harness-driven era. Model capability is just the foundation; engineering architecture, execution environment, multi-agent collaboration, and verification mechanisms define the upper limit.