DGX Spark vs. Mac Studio: A Price-Checked Look at NVIDIA's Personal AI Supercomputer

Availability, real-world retail pricing across six countries, and comparison against Mac Studio.


NVIDIA DGX Spark is real, on sale Oct 15, 2025, and aimed at CUDA developers who want to run LLM workloads locally on an integrated NVIDIA AI stack. US MSRP is $3,999; UK/DE/JP retail is higher due to VAT and channel markups. AUD/KRW public sticker prices are not yet widely posted.

Against a Mac Studio configured with 128 GB and a large SSD, Spark often costs about the same as or less than a maxed-out M4 Max, and roughly the same as an entry M3 Ultra. But Mac Studio can go to 512 GB and >800 GB/s unified bandwidth, while Spark wins for CUDA/FP4 and 200 Gb/s two-box clustering.

DGX Spark vs. Mac Studio graphic

What is NVIDIA DGX Spark?

NVIDIA DGX Spark is a compact, desk-friendly AI workstation built around the Grace Blackwell GB10 Superchip (ARM CPU + Blackwell GPU on the same package via NVLink-C2C). NVIDIA positions it as a “personal AI supercomputer” for developers, researchers, and advanced students who want to prototype, fine-tune, and run inference on large models (up to ~200B parameters) locally, then hand off to data center or cloud.

This represents NVIDIA’s push to bring datacenter-grade AI capabilities to individual developers and small teams, democratizing access to powerful AI infrastructure that was previously only available in enterprise cloud environments or expensive on-premises servers. The form factor is deliberately designed to fit on a desk alongside standard development equipment, making it practical for office, home lab, or educational settings.

Core specifications

  • Compute: up to 1 PFLOP of FP4 AI performance (roughly 1,000-TOPS-class figures cited in NVIDIA's materials). The Blackwell GPU architecture provides significant improvements in tensor core operations, particularly for the FP4 and INT4 quantized inference that’s become essential for running modern LLMs efficiently.
  • Memory: 128 GB unified LPDDR5x (soldered, non-upgradeable) with approximately 273 GB/s bandwidth. The unified memory architecture means both the Grace CPU and Blackwell GPU share the same memory pool, eliminating PCIe transfer bottlenecks when moving data between CPU and GPU. This is particularly beneficial for AI workloads that involve frequent host-device memory transfers.
  • Storage: 1–4 TB NVMe SSD (Founders Edition commonly listed with 4 TB). The NVMe storage is crucial for storing large model checkpoints, datasets, and intermediate training states. The 4 TB configuration provides ample space for multiple large model versions and training data.
  • I/O / Networking: 10 Gigabit Ethernet, Wi-Fi 7, HDMI 2.1, multiple USB-C with DisplayPort alt mode; many partner configs include ConnectX-7 (200 Gb/s) ports for clustering two units with RDMA (Remote Direct Memory Access) capabilities. The high-speed interconnect enables near-linear scaling when running distributed training or inference across two units.
  • Size / Power: ultra-small-form-factor (~150 × 150 × 50.5 mm, roughly 5.9 × 5.9 × 2.0 inches), external PSU; ~170 W typical power consumption under AI workloads. This is remarkably efficient compared to traditional AI workstations that often require 400-1000W power supplies and tower cases. The compact design means it can run from standard office power outlets without special electrical requirements.
  • Software: ships with DGX Base OS (Ubuntu-based) and the NVIDIA AI software stack including CUDA-X libraries, Triton Inference Server, RAPIDS for GPU-accelerated data science, PyTorch and TensorFlow optimized builds, NeMo framework for conversational AI, and access to NGC (NVIDIA GPU Cloud) container registry with pre-optimized models and containers. This provides turnkey GenAI workflows without spending weeks configuring dependencies and optimizing frameworks.

Architecture advantages

The Grace Blackwell GB10 Superchip represents a significant architectural innovation. By combining the ARM-based Grace CPU cores with Blackwell GPU compute units on a single package connected via NVLink-C2C (Chip-to-Chip interconnect), NVIDIA achieves dramatically lower latency and higher bandwidth for CPU-GPU communication compared to traditional PCIe-based systems. This tight integration is especially beneficial for:

  • Preprocessing and postprocessing stages in AI pipelines where CPU and GPU need to exchange data rapidly
  • Hybrid workloads that leverage both CPU and GPU compute simultaneously
  • Memory-intensive applications where the unified memory model eliminates costly data duplication between host and device
  • Real-time inference scenarios where low latency is critical

NVIDIA initially teased the device as Project “Digits” at earlier conferences; the production name is DGX Spark, continuing the DGX brand known from datacenter AI systems.


Availability & release timing

  • Release week: NVIDIA announced orders open Wednesday, October 15, 2025 via NVIDIA.com and authorized channel partners. This follows months of anticipation after the initial Project Digits announcement at GTC (GPU Technology Conference) earlier in 2025.
  • Global rollout: NVIDIA product pages and press materials mention worldwide partners including major OEMs: Acer, ASUS, Dell, HP, Lenovo, MSI, and Gigabyte launching compatible GB10-based mini workstations. Each partner may offer slightly different configurations, warranty terms, and support options.
  • Supply constraints: Early availability appears constrained, particularly outside the United States. Many retailers are showing “order on request,” “pre-order,” or “back-order” status rather than immediate in-stock availability. This is typical for cutting-edge hardware launches, especially with complex system-on-chip designs like the GB10.
  • Regional variations: While US customers can order directly from NVIDIA and major retailers, international customers may face longer wait times and should check with local authorized distributors for accurate delivery timelines. Some regions (notably Australia and South Korea) still don’t have public retail pricing posted.

Actual street prices we can verify

Below are current, public retail/pricelist entries we could find as of Oct 15, 2025 (AU/Melbourne), with approximate USD equivalents for context. Where a firm local price is not yet posted, we note the status.

How USD equivalents were estimated: We used October 2025 reference rates/historical snapshots (Exchange-Rates.org & ExchangeRatesUK); exact checkout totals vary by taxes/duties and card FX.

| Country | Price in local currency | USD equivalent (approx.) | Comment / Source |
|---|---|---|---|
| United States | $3,999 | $3,999 | US press & NVIDIA launch materials list $3,999 for DGX Spark (final vs. the earlier $3,000 tease). |
| United Kingdom | £3,699.97 inc. VAT | ≈$4,868 | Novatech product page shows £3,699.97 inc. VAT (Founders Edition code). USD ≈ £ × 1.316 using Oct-2025 ref. |
| Germany | €3,689 | ≈$4,264 | heise reported “3689 € in Germany” for the 4 TB config. USD ≈ € × 1.156 using Oct-2025 ref. |
| Japan | ¥899,980 (Tsukumo) | ≈$6,075 | Tsukumo retail listing shows ¥899,980 (incl. tax). NTT-X shows ¥911,790; both “order on request.” USD ≈ ¥ ÷ 148.14. |
| South Korea | Price on request / pre-order | — | NVIDIA KR marketplace lists Spark; local partners taking pre-orders, no public KRW sticker price yet. |
| Australia | TBA | — | NVIDIA AU product page is live, but no AUD ticketed price yet from major AU retailers at time of writing. |

Notes:

  • The UK retail entry (Novatech) and JP retailers (Tsukumo, NTT-X) are for the Founders Edition with 4 TB SSD. Availability may be order-upon-request or back-ordered.
  • Germany’s €3,689 comes from mainstream tech-press price guidance; some B2B shops list Spark “price on request” pending stock.
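As a sanity check on the table above, the USD equivalents are simple conversions at the stated reference rates. The helper below reproduces them (results may round a dollar or so differently from the table depending on the exact rate snapshot):

```python
# Approximate USD equivalents for the street prices above, using the
# reference rates stated in the table (mid-Oct-2025 snapshots; checkout
# totals also vary with taxes, duties, and card FX fees).

def gbp_to_usd(gbp, rate=1.316):
    return gbp * rate

def eur_to_usd(eur, rate=1.156):
    return eur * rate

def jpy_to_usd(jpy, rate=148.14):
    return jpy / rate

print(f"UK  £3,699.97 -> ${gbp_to_usd(3699.97):,.0f}")   # ~ $4,869
print(f"DE  €3,689    -> ${eur_to_usd(3689):,.0f}")      # ~ $4,264
print(f"JP  ¥899,980  -> ${jpy_to_usd(899_980):,.0f}")   # ~ $6,075
```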


Typical configurations (what you’ll actually see)

Understanding the different SKUs and configurations is important because memory is non-upgradeable and storage options vary significantly:

NVIDIA Founders Edition

This is the reference configuration sold directly by NVIDIA and serves as the baseline for most reviews and benchmarks:

  • Core specs: GB10 Superchip, 128 GB LPDDR5x unified memory, 4 TB NVMe SSD
  • Networking: Wi-Fi 7 (802.11be), 10 Gigabit Ethernet, ConnectX-7 SmartNIC with 200 Gb/s ports for dual-unit clustering
  • Display and peripherals: HDMI 2.1 (supports 4K @ 120Hz or 8K @ 60Hz), multiple USB-C ports with DisplayPort alt mode, USB-A ports
  • Dimensions: ~150 × 150 × 50.5 mm (5.9 × 5.9 × 2.0 inches)
  • Power: External power supply, ~170W typical consumption
  • Included software: DGX Base OS with full NVIDIA AI Enterprise software stack

The Founders Edition with ConnectX-7 is particularly appealing for researchers who might want to scale to a two-node cluster in the future without needing to replace hardware.

Partner OEM SKUs

System integrators and OEMs offer variations with different trade-offs:

  • Storage options: Some partners offer 1 TB, 2 TB, or 4 TB SSD configurations at different price points. If you’re primarily doing inference with downloaded models and don’t need to store multiple large checkpoints, a 1-2 TB option could save several hundred dollars.
  • Networking variations: Not all partner SKUs include the ConnectX-7 200 Gb/s adapter. Budget-oriented models may ship with only 10GbE and Wi-Fi 7. If you don’t plan to cluster two units, this can reduce costs.
  • Enclosure differences: Partners use their own industrial designs, which may affect cooling performance, noise levels, and aesthetics. Some may offer rack-mount options for lab environments.
  • Service and support: Dell, HP, and Lenovo typically provide enterprise-grade support options including on-site service, extended warranties, and integration with corporate IT management systems—valuable for business deployments.
  • Memory note: All configurations use the same 128 GB LPDDR5x soldered memory. This is not configurable across any SKU because it’s part of the GB10 Superchip package design.

When choosing a configuration, consider:

  • Do you need clustering? If yes, ensure the SKU includes ConnectX-7
  • How much local storage? Model weights, datasets, and checkpoints add up quickly
  • What support do you need? NVIDIA direct vs. enterprise OEM support with SLAs
  • What’s the total cost? Partner SKUs may bundle other software or services

DGX Spark vs. Mac Studio (similar-memory comparison)

What we match: DGX Spark Founders (GB10, 128 GB unified, up to 4 TB SSD) vs. Mac Studio configured to 128 GB unified (M4 Max) or higher-end M3 Ultra when considering maximum memory bandwidth/scale.

Price snapshot

  • DGX Spark (US): $3,999.
  • Mac Studio base pricing (US): M4 Max from $1,999, M3 Ultra from $3,999 (many users add memory/storage to reach 128 GB/4 TB).
  • Memory upgrades: Apple offers factory configs up to 128 GB (M4 Max) or 512 GB (M3 Ultra); AU store shows the step-up costs (indicative only for pricing deltas).

Takeaway: To match 128 GB/4 TB, a Mac Studio’s final price will usually land well above its $1,999 base, and can be comparable to or higher than Spark depending on chip (M4 Max vs M3 Ultra) and storage. Meanwhile, Spark’s 4 TB/128 GB SKU is a single fixed bundle at $3,999.

Performance & architecture

AI compute capabilities

  • DGX Spark: Advertises up to 1 PFLOP (FP4) theoretical peak performance for AI workloads — a specification that reflects the tensor core capabilities of the Blackwell GPU when performing 4-bit floating point operations. This is particularly relevant for modern LLM inference which increasingly uses aggressive quantization (FP4, INT4, INT8) to fit larger models in available memory. The Blackwell architecture includes specialized tensor cores optimized for these lower-precision formats with minimal accuracy degradation.

  • Mac Studio: Apple doesn’t publish PFLOP ratings directly. Instead, they cite application-level benchmarks (video encoding, ML model training time, etc.) and Neural Engine TOPS ratings. The M4 Max offers 38 TOPS from its Neural Engine, while the M3 Ultra delivers 64 TOPS. However, these figures aren’t directly comparable to NVIDIA’s CUDA core specs because they measure different computational patterns and precision formats.

Practical implications: If your workload is CUDA-first (standard PyTorch, TensorFlow, JAX workflows), you’ll have mature tooling and extensive documentation with Spark. If you’re building around Apple’s MLX framework or Core ML, Mac Studio is the native choice. For standard open-source AI development, Spark offers broader ecosystem compatibility.

Unified memory capacity & bandwidth

  • DGX Spark: Fixed 128 GB LPDDR5x unified memory with approximately 273 GB/s bandwidth. This is shared between Grace CPU and Blackwell GPU without PCIe overhead. While 273 GB/s may seem modest compared to high-end GPUs, the unified architecture eliminates data copies between CPU and GPU memory spaces, which can be a hidden bottleneck in traditional systems.

  • Mac Studio: Configurable from 64 GB up to 128 GB (M4 Max) or 192-512 GB (M3 Ultra) with >800 GB/s unified memory bandwidth on Ultra-class variants. The M3 Ultra achieves over 800 GB/s through its ultra-wide memory interface. For workloads involving extremely large context windows (100K+ tokens), massive embedding tables, or simultaneous loading of multiple large models, Mac Studio’s higher memory ceiling provides critical headroom.

When memory capacity matters:

  • Running Llama 3 405B in higher precision formats benefits from 512 GB
  • Training large vision transformers with massive batch sizes
  • Multi-modal models that need to keep vision and language models resident simultaneously
  • Running multiple concurrent model serving instances

When 128 GB is sufficient:

  • Most quantized LLMs up to ~200B parameters (e.g., quantized Llama 3 70B or Mixtral 8x22B; a 405B model exceeds 128 GB even at FP4)
  • Fine-tuning models in the 7B-70B range
  • Standard inference workloads with typical batch sizes
  • Research and prototyping with state-of-the-art models
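A quick back-of-envelope check makes the 128 GB capacity lists above concrete: weight footprint is roughly parameter count times bits-per-weight divided by eight, before KV cache, activations, and OS overhead. The sketch below uses that rule of thumb (the ~10% headroom figure is an illustrative assumption, not an NVIDIA spec):

```python
# Rough rule of thumb for whether a model's weights fit in unified memory:
# weight bytes ~= parameters * bits-per-weight / 8, plus headroom for the
# KV cache, activations, and the OS (illustrative estimate only).

def weight_gb(params_billion, bits_per_weight):
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9  # decimal GB

MEMORY_GB = 128  # DGX Spark's fixed unified memory

for name, params, bits in [
    ("Llama 3 70B @ FP16",  70, 16),
    ("Llama 3 70B @ FP4",   70, 4),
    ("Mixtral 8x22B @ FP4", 141, 4),
    ("200B model @ FP4",    200, 4),
    ("405B model @ FP4",    405, 4),
]:
    gb = weight_gb(params, bits)
    fits = "fits" if gb < MEMORY_GB * 0.9 else "does NOT fit"  # ~10% headroom
    print(f"{name}: ~{gb:.0f} GB of weights -> {fits} in {MEMORY_GB} GB")
```

This matches the marketing claim of "up to ~200B parameters" at FP4, and shows why a 405B model needs either aggressive offloading or a second clustered unit.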

Interconnect & clustering capabilities

  • DGX Spark: Partner SKUs commonly include ConnectX-7 SmartNIC (200 Gb/s) with RDMA support for direct two-node clustering. This enables distributed training and inference across two units with near-linear scaling for many workloads. NVIDIA’s NCCL (NVIDIA Collective Communications Library) is highly optimized for multi-GPU communication over these high-speed links. Two DGX Spark units offer a combined 256 GB of memory across the cluster for training workloads that benefit from data parallelism or model parallelism.

  • Mac Studio: Tops out at built-in 10 Gigabit Ethernet (Thunderbolt bridging between machines is possible, but offers nothing close to RDMA-class bandwidth or latency). While you can technically cluster Mac Studios over the network, there’s no native high-bandwidth, low-latency interconnect like NVLink or InfiniBand. macOS also lacks the mature distributed training frameworks that CUDA developers rely on.

Clustering use cases for Spark:

  • Distributed fine-tuning of models that don’t fit in 128 GB
  • Pipeline parallelism for very large models
  • Data parallel training with larger effective batch sizes
  • Research on distributed AI algorithms
  • Increased inference throughput by load-balancing across units
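The gap between the 200 Gb/s ConnectX-7 link and plain 10 GbE is easy to quantify with a back-of-envelope transfer-time estimate. The sketch below assumes the link is the bottleneck and ignores NCCL's actual all-reduce traffic pattern and protocol overheads, so treat it as an upper-bound illustration:

```python
# Back-of-envelope: time to move one copy of a model's gradients over the
# cluster link (real all-reduce traffic and NCCL overheads differ; this is
# an illustrative bound, not a benchmark).

def transfer_seconds(payload_gb, link_gbps):
    return payload_gb * 8 / link_gbps  # GB -> gigabits, then / Gb/s

grad_gb = 14.0  # e.g. a 7B model's FP16 gradients: 7e9 params * 2 bytes

print(f"200 Gb/s ConnectX-7: {transfer_seconds(grad_gb, 200):.2f} s per pass")
print(f"10 GbE:              {transfer_seconds(grad_gb, 10):.2f} s per pass")
```

Per-step synchronization that takes ~0.5 s over ConnectX-7 stretches past 11 s over 10 GbE, which is why high-speed interconnect, not raw compute, usually decides whether two-node training is practical.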

Ecosystem & tooling

  • DGX Spark ecosystem:

    • CUDA-X libraries: Comprehensive suite including cuDNN (deep learning), cuBLAS (linear algebra), TensorRT (inference optimization)
    • NVIDIA AI Enterprise: Commercial software suite with enterprise support, security updates, and stability guarantees
    • NGC (NVIDIA GPU Cloud): Pre-configured containers for popular frameworks, verified to work together without dependency conflicts
    • Framework support: First-class support for PyTorch, TensorFlow, JAX, MXNet with NVIDIA optimizations
    • Development tools: NVIDIA Nsight for profiling, CUDA-GDB for debugging, extensive sampling and tracing tools
    • Community: Massive CUDA developer community, extensive StackOverflow coverage, countless tutorials and examples
  • Mac Studio ecosystem:

    • Metal/Core ML: Apple’s native GPU compute and ML frameworks, highly optimized for Apple Silicon
    • MLX: Apple’s new NumPy-like framework for ML on Apple Silicon, gaining traction
    • Unified tools: Excellent integration with Xcode, Instruments profiling, and macOS development stack
    • Media engines: Dedicated video encoding/decoding blocks that dramatically accelerate content creation workflows
    • Creative apps: Final Cut Pro, Logic Pro, and Adobe Creative Suite optimized for Apple Silicon
    • Stability: Highly polished, stable environment ideal for production deployments

Bottom line decision matrix:

Choose DGX Spark if you:

  • Work primarily with CUDA-based workflows (standard PyTorch, TensorFlow)
  • Need FP4/INT4 quantization acceleration for efficient LLM inference
  • Want the option for two-node clustering at 200 Gb/s for future scalability
  • Require the full NVIDIA AI software stack with enterprise support
  • Need Linux-native development environment
  • Work with models in the 7B-200B parameter range with quantization
  • Value ecosystem compatibility with most open-source AI research code

Choose Mac Studio if you:

  • Need more than 128 GB memory (up to 512 GB on M3 Ultra)
  • Require maximum memory bandwidth (>800 GB/s)
  • Work in the macOS/iOS ecosystem and need development/deployment consistency
  • Use Core ML, Metal, or MLX frameworks
  • Have hybrid AI + creative workloads (video editing, 3D rendering, audio production)
  • Prefer the macOS user experience and integration with Apple services
  • Need a quiet, reliable workstation with excellent power efficiency
  • Don’t require CUDA specifically and can work with alternative frameworks

Practical use cases and workflows

Understanding who should buy DGX Spark requires looking at real-world scenarios where its unique combination of features provides value:

AI research and prototyping

Scenario: Academic researchers and graduate students working on novel LLM architectures, fine-tuning techniques, or multi-modal models.

Why Spark fits: The 128 GB unified memory handles most research-scale models (7B-70B base models, quantized 200B+ models). The NVIDIA AI stack includes all standard research tools. Two-unit clustering capability allows scaling experiments without migrating to cloud. The compact size fits in lab spaces where rack servers won’t fit.

Example workflows:

  • Fine-tuning Llama 3 70B on custom datasets
  • Experimenting with LoRA/QLoRA techniques
  • Testing prompt engineering strategies locally before cloud deployment
  • Developing custom CUDA kernels for novel attention mechanisms

Enterprise AI application development

Scenario: Startups and enterprise teams building AI-powered applications that need on-premises development/testing before cloud deployment.

Why Spark fits: Matches production environment specs (CUDA stack, Linux, containerized workflows). NGC containers provide production-grade, validated software. Teams can develop and test locally without cloud costs during active development. Once validated, workloads deploy to DGX Cloud or on-prem DGX systems with minimal changes.

Example workflows:

  • Building RAG (Retrieval Augmented Generation) systems
  • Custom chatbot/agent development with company-specific models
  • Local testing of model serving infrastructure
  • Training small-to-medium models on proprietary data

Educational institutions

Scenario: Universities and training programs teaching AI/ML courses need equipment that provides professional-grade experience without datacenter complexity.

Why Spark fits: Provides “datacenter in a box” experience. Students learn on the same NVIDIA stack they’ll use professionally. Compact form factor works in classroom/lab settings. Can support multiple student projects simultaneously via containerization.

Example workflows:

  • Teaching distributed deep learning courses
  • Student projects in NLP, computer vision, reinforcement learning
  • ML engineering bootcamps and certification programs
  • Research internship programs

Independent AI developers and consultants

Scenario: Solo practitioners and small consultancies who need flexible, powerful AI infrastructure but can’t justify cloud costs for continuous development.

Why Spark fits: One-time capital expenditure vs ongoing cloud bills. Full control over data and models (important for client confidentiality). Can run 24/7 training/inference jobs without accumulating charges. Portable—bring to client sites if needed.

Example workflows:

  • Client-specific model fine-tuning
  • Running private inference services
  • Experimentation with open-source models
  • Building AI products and demos

What DGX Spark is NOT ideal for

To set realistic expectations, here are scenarios where other solutions are better:

  • Production inference at scale: Cloud services or dedicated inference servers (like NVIDIA L4/L40S) are more cost-effective for high-volume serving
  • Very large model training: Models requiring >256 GB (even with two-unit clustering) need DGX H100/B100 systems or cloud
  • Massive batch jobs: If you need 8+ GPUs in parallel, look at traditional workstation/server builds
  • Windows-primary workflows: DGX Base OS is Ubuntu-based; Windows support is not a focus
  • Cost-optimized solutions: If budget is the primary constraint, used datacenter GPUs or cloud spot instances may be more economical
  • Creative-first workloads: If AI is secondary to video editing, music production, or graphic design, Mac Studio is likely better

Quick FAQ

When can I buy it? Orders opened October 15, 2025 via NVIDIA.com and partners. Early supply is constrained; expect order-on-request status at many retailers.

Is $3,999 the price everywhere? No. US MSRP is $3,999, but international prices are higher due to VAT and local factors: £3,700 (UK), €3,689 (DE), ¥899,980 (JP). Australia and South Korea pricing not yet widely posted.

Can I upgrade the RAM? No. The 128 GB LPDDR5x is soldered as part of the GB10 Superchip package. Storage varies by SKU (1-4 TB) but must be chosen at purchase.

Who is this for? AI researchers, developers, and advanced students working with LLMs locally. Best suited for those who need CUDA, want to prototype before cloud deployment, or require on-premises AI development.



Technical considerations for deployment

If you’re planning to deploy DGX Spark in your environment, here are practical technical considerations based on the specifications:

Power and infrastructure requirements

  • Power consumption: ~170W typical during AI workloads, external power supply included
  • Electrical: Standard office power (110-240V) is sufficient—no special high-amperage circuits needed
  • UPS recommendation: A 500-1000VA UPS can provide backup power for graceful shutdown during outages
  • Power compared to alternatives: Dramatically lower than traditional AI workstations (350-1000W) or multi-GPU servers
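The efficiency difference compounds over time. A rough annual electricity estimate, assuming 24/7 operation and a $0.30/kWh tariff (both assumptions; adjust for your duty cycle and local rates):

```python
# Illustrative annual electricity cost: ~170 W typical DGX Spark draw versus
# a 1 kW multi-GPU tower. 24/7 operation and $0.30/kWh are assumptions.

def annual_cost_usd(watts, hours=24 * 365, usd_per_kwh=0.30):
    return watts / 1000 * hours * usd_per_kwh

print(f"DGX Spark (~170 W):  ${annual_cost_usd(170):,.0f}/year")
print(f"1 kW tower (1000 W): ${annual_cost_usd(1000):,.0f}/year")
```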

Cooling and acoustics

  • Thermal design: Compact form factor with active cooling; NVIDIA hasn’t published detailed noise specs
  • Ventilation: Ensure adequate airflow around the unit; don’t place in enclosed cabinets without ventilation
  • Ambient temperature: Standard office environment (18-27°C / 64-80°F recommended)
  • Noise expectations: Will be audible under load (like any high-performance compute device), but likely quieter than tower workstations with multiple GPUs

Networking setup considerations

  • 10 GbE: If using the 10 Gigabit Ethernet, ensure your switch supports 10GbE and use appropriate Cat6a/Cat7 cables
  • Wi-Fi 7: Requires Wi-Fi 7 capable router/access point for full performance; backward compatible with Wi-Fi 6/6E
  • Clustering (ConnectX-7): For two-unit clustering, you’ll need either:
    • Direct connection with compatible cables (DAC or fiber)
    • 200GbE-capable switch (enterprise-grade, significant investment)
    • Consult NVIDIA documentation for specific validated configurations

Storage management

  • NVMe SSD: High-performance storage included, but consider backup strategy
  • External storage: USB-C and network storage for datasets, model checkpoints, and backups
  • Storage planning: Model checkpoints can be 100+ GB each; plan capacity accordingly
    • 1 TB: Suitable for inference-focused workflows with occasional fine-tuning
    • 2 TB: Balanced for most researchers doing regular fine-tuning
    • 4 TB: Best for those maintaining multiple model versions, large datasets, or training from scratch
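The "100+ GB per checkpoint" figure follows from checkpoint anatomy: a full training checkpoint typically stores the weights plus optimizer state (Adam keeps two moments per parameter, often in FP32), so it can be several times the model's serving size. A hedged sizing sketch, using common but not universal precision choices:

```python
# Sketch of checkpoint storage planning. Assumes FP16 weights (2 B/param)
# plus Adam optimizer moments in FP32 (2 * 4 B/param); mixed-precision
# recipes vary, so treat these as illustrative figures.

def checkpoint_gb(params_billion, weight_bytes=2, optimizer_bytes=8):
    return params_billion * (weight_bytes + optimizer_bytes)

for params in (7, 13, 70):
    gb = checkpoint_gb(params)
    per_tb = int(1000 // gb)
    print(f"{params}B model: ~{gb:.0f} GB per checkpoint (~{per_tb} per TB)")
```

At ~700 GB per 70B-model checkpoint, even the 4 TB SKU holds only a handful of full training snapshots, which is why external or network storage appears in the list above.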

Software and container strategy

  • DGX Base OS: Ubuntu-based; comes with NVIDIA drivers and CUDA toolkit pre-installed
  • Container workflows: Recommended approach for most users:
    • Pull verified containers from NGC for specific frameworks
    • Develop inside containers for reproducibility
    • Version control your Dockerfiles and requirements
  • Security updates: Plan for regular OS and software stack updates; NVIDIA provides update channels
  • Monitoring: Set up GPU monitoring (nvidia-smi, DCGM) for utilization tracking and thermal monitoring
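For lightweight monitoring, `nvidia-smi --query-gpu` can emit machine-readable CSV that is easy to poll from a script. A minimal sketch; parsing is demonstrated against a canned sample line so it runs without a GPU, and on the device you would read the subprocess output instead:

```python
# Minimal utilization poller built on `nvidia-smi --query-gpu`, which emits
# CSV when given --format=csv,noheader,nounits. The sample line stands in
# for real output so the sketch runs on any machine.
import subprocess

QUERY = ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,temperature.gpu",
         "--format=csv,noheader,nounits"]

def parse_line(line):
    util, mem_used, temp = (field.strip() for field in line.split(","))
    return {"util_pct": int(util), "mem_used_mib": int(mem_used),
            "temp_c": int(temp)}

def poll():
    # One dict per GPU; raises CalledProcessError if nvidia-smi fails.
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
    return [parse_line(l) for l in out.stdout.strip().splitlines()]

sample = "87, 98304, 71"  # utilization %, memory MiB, temperature C
print(parse_line(sample))
```

For continuous fleet-style telemetry, DCGM is the heavier-duty option; this kind of polling is fine for a single desk unit.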

Integration with existing infrastructure

  • Authentication: Consider integrating with existing LDAP/Active Directory for enterprise deployments
  • Shared storage: Mount network file systems (NFS, CIFS) for shared datasets across team
  • Remote access: SSH for terminal access; consider setting up JupyterHub or VS Code Server for remote development
  • VPN: If accessing remotely, ensure proper VPN setup for security

Budget considerations beyond hardware

When calculating total cost of ownership, factor in:

  • Software licenses: Some commercial AI frameworks require licenses (though open-source options are plentiful)
  • Cloud costs during development: You may still use cloud for final training runs or deployment
  • Additional storage: External NAS or backup solutions
  • Network upgrades: 10GbE switch if your current infrastructure doesn’t support it
  • Training/learning time: If your team is new to NVIDIA AI stack, budget time for learning curve
  • Support contracts: Consider NVIDIA enterprise support if you’re deploying mission-critical applications
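One way to frame the capex-vs-cloud question in the list above is a break-even calculation. The hourly cloud rate below is an assumption for illustration, not a quote from any provider; substitute your own figures:

```python
# Break-even sketch: one-time $3,999 hardware outlay versus renting a
# comparable cloud GPU instance by the hour. $1.50/hr is an assumed rate.

def breakeven_hours(hardware_usd=3999, cloud_usd_per_hour=1.50):
    return hardware_usd / cloud_usd_per_hour

hours = breakeven_hours()
print(f"Break-even after ~{hours:,.0f} GPU-hours "
      f"(~{hours / (8 * 22):.0f} months at 8 h/day, 22 days/month)")
```

Note this ignores electricity, support contracts, and the cloud's elasticity (you pay nothing when idle), so it favors local hardware for steady daily use and cloud for bursty workloads.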

Comparison with building your own workstation

DGX Spark advantages:

  • Integrated, validated hardware and software stack
  • Compact, power-efficient design
  • Enterprise support options
  • Known performance characteristics
  • Turnkey experience

Custom workstation advantages:

  • Potentially lower cost for similar GPU performance (using discrete GPUs)
  • Upgradeable components
  • Flexible configuration (can add more RAM, storage, GPUs later)
  • Windows compatibility if needed

The trade-off: DGX Spark sacrifices upgradeability and flexibility for integration, efficiency, and the complete NVIDIA AI software ecosystem. Choose based on whether you value turnkey convenience or maximum customization.


Sources & further reading

  • NVIDIA DGX Spark product & marketplace pages (specs, positioning): NVIDIA.com (global/DE/AU/KR).
  • Launch timing & US pricing: NVIDIA press (Oct 13, 2025); The Verge coverage (Oct 13, 2025).
  • Country pricing examples: Novatech UK (£3,699.97); heise DE (€3,689); Tsukumo JP (¥899,980); NTT-X JP (¥911,790).
  • Partner ecosystem / two-unit stacking & spec details: heise & ComputerBase coverage.
  • Mac Studio pricing/specs: Apple pages (specs/options/pricing regions) and launch coverage.
  • FX references for USD equivalents: Exchange-Rates.org / ExchangeRatesUK (Oct-2025 snapshots).