Every enterprise Azure deployment starts with a networking decision that is difficult to reverse: hub-and-spoke or Virtual WAN. Both architectures support the Cloud Adoption Framework's landing zone model; the difference lies in control, complexity, cost, and scale. Drawing on dozens of enterprise landing zone deployments we have delivered, this post provides the technical depth, cost modeling, and decision framework required to make this choice with confidence.
The Challenge
Networking topology is the foundation layer of an Azure landing zone. It determines how workloads communicate, how on-premises connectivity is established, how traffic is inspected, how DNS resolves, and how the environment scales across regions. Changing the topology after workloads are deployed is possible but expensive: it requires re-peering VNets, reconfiguring route tables, migrating gateways, and coordinating application-level DNS changes. The cost of a wrong initial decision compounds with every workload deployed on top of it.
Organizations typically face this decision when they are adopting the Azure Cloud Adoption Framework (CAF) landing zone architecture for the first time, or when they are refactoring an organic Azure environment into a governed structure. In both cases, the networking topology choice is the first architectural decision and the one with the longest-lasting consequences.
The two options — hub-and-spoke (self-managed) and Azure Virtual WAN (Microsoft-managed) — are not interchangeable. They optimize for different priorities and operate at different cost points. The right choice depends on your scale, your team's networking expertise, your multi-region requirements, and your appetite for control versus convenience.
Architecture Deep Dive: Hub-and-Spoke
In a hub-and-spoke topology, you deploy a central hub Virtual Network (VNet) that hosts shared networking services. Workload VNets (spokes) establish VNet peering connections to the hub. All inter-spoke traffic and internet-bound traffic routes through the hub, where it is inspected by Azure Firewall or a third-party Network Virtual Appliance (NVA).
Hub VNet Component Breakdown
A properly designed hub VNet contains several dedicated subnets, each hosting a specific shared service. The subnet sizing and SKU selection decisions made here determine the capacity ceiling for the entire environment.
Hub VNet address space: We typically allocate a /22 for the hub VNet, which provides 1,024 addresses — sufficient for all required subnets with room for future services. Using a /22 avoids the need to expand the address space later, which requires re-peering all spoke VNets.
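The /22 carve-up can be sketched with Python's `ipaddress` module. The subnet names and sizes below are illustrative assumptions based on the services discussed in this post, not a prescriptive layout; the well-known names (`AzureFirewallSubnet`, `GatewaySubnet`, `AzureBastionSubnet`) are required by their respective services.

```python
import ipaddress

# Assumed hub address space; substitute your own allocation.
hub = ipaddress.ip_network("10.0.0.0/22")  # 1,024 addresses

plan = [
    ("AzureFirewallSubnet", 26),   # Azure Firewall requires at least a /26
    ("GatewaySubnet", 26),         # /27 recommended minimum; /26 adds headroom
    ("AzureBastionSubnet", 26),    # Bastion requires at least a /26
    ("snet-dns-inbound", 28),      # DNS Private Resolver inbound endpoint
    ("snet-dns-outbound", 28),     # DNS Private Resolver outbound endpoint
]

# Allocate front-to-back, largest subnets first, so every subnet
# starts on a correctly aligned boundary.
allocations = []
cursor = hub.network_address
for name, prefix in sorted(plan, key=lambda item: item[1]):
    subnet = ipaddress.ip_network(f"{cursor}/{prefix}")
    allocations.append((name, subnet))
    cursor = subnet.broadcast_address + 1

for name, subnet in allocations:
    print(f"{name:<24} {subnet}  ({subnet.num_addresses} addresses)")
```

Running this confirms the five subnets consume well under half the /22, leaving room for future shared services without re-peering.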
Spoke VNet Design Patterns
Each spoke VNet hosts a single workload or application environment. Spokes are isolated from each other by default — inter-spoke communication requires routing through the hub firewall, which provides centralized security inspection.
- Address space: /24 per spoke for small workloads, /22 for larger workloads requiring multiple subnets (app tier, data tier, integration tier)
- Peering configuration: Allow forwarded traffic enabled, use remote gateway enabled (to leverage the hub's VPN/ExpressRoute gateways)
- Route table: Each spoke requires a User-Defined Route (UDR) with a default route (0.0.0.0/0) pointing to the Azure Firewall's private IP. Without this, spoke traffic bypasses the firewall.
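Why the default UDR matters can be seen by simulating Azure's longest-prefix-match route selection. This is a minimal sketch, not the Azure routing engine; the firewall IP and spoke prefix are assumed values.

```python
import ipaddress

def next_hop(dest_ip, routes):
    """Pick the route with the longest matching prefix (Azure-style LPM)."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [r for r in routes if dest in ipaddress.ip_network(r["prefix"])]
    best = max(matches, key=lambda r: ipaddress.ip_network(r["prefix"]).prefixlen)
    return best["next_hop"]

FIREWALL_IP = "10.0.0.4"  # assumed Azure Firewall private IP in the hub

# Effective routes on a spoke NIC *with* the default UDR applied.
spoke_routes = [
    {"prefix": "10.1.0.0/24", "next_hop": "VnetLocal"},  # spoke's own VNet
    {"prefix": "0.0.0.0/0",   "next_hop": FIREWALL_IP},  # UDR -> firewall
]

print(next_hop("10.1.0.10", spoke_routes))     # intra-VNet: VnetLocal
print(next_hop("93.184.216.34", spoke_routes)) # internet-bound: 10.0.0.4
```

Remove the `0.0.0.0/0` entry and internet-bound traffic falls back to the system default route, exiting directly without firewall inspection.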
Routing in Hub-and-Spoke
Routing is the most operationally complex aspect of hub-and-spoke. Every traffic flow requires explicit route table configuration. There is no transitive routing by default — Spoke A cannot communicate with Spoke B through the hub unless UDRs and firewall rules are configured for both directions.
This is simultaneously the architecture's greatest strength and its greatest operational burden. You have complete control over every traffic flow. You also bear complete responsibility for every traffic flow.
Architecture Deep Dive: Azure Virtual WAN
Azure Virtual WAN (vWAN) is a Microsoft-managed networking service that replaces the self-managed hub with a managed hub infrastructure. You do not deploy a VNet for the hub — Microsoft provisions and manages the hub resources. You connect spoke VNets, branch offices (via VPN), and ExpressRoute circuits to the vWAN hub.
Virtual WAN Hub Component Breakdown
Key difference: Virtual WAN provides transitive routing natively. When Spoke A and Spoke B are both connected to the vWAN hub, they can communicate through the hub without any UDR configuration. When you enable Routing Intent, all traffic between spokes automatically routes through the firewall. This eliminates the single largest operational burden of hub-and-spoke.
Scale Unit Sizing Guidance
VPN Gateway scale units determine aggregate VPN throughput. For most enterprise deployments with 5-20 branch offices:
- 2 Scale Units (1 Gbps aggregate): Sufficient for most mid-market enterprises with standard branch connectivity needs
- 4+ Scale Units (2+ Gbps aggregate): Required for enterprises with high-bandwidth branch-to-cloud traffic or large numbers of concurrent VPN tunnels (50+)
ExpressRoute Gateway scale units are typically determined by your ExpressRoute circuit bandwidth. A 1 Gbps ExpressRoute circuit requires at minimum 1 Scale Unit (2 Gbps capacity) to handle burst traffic without packet loss.
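The sizing arithmetic above can be captured in a couple of helper functions. These assume the throughput figures quoted in this section (500 Mbps per VPN scale unit, 2 Gbps per ExpressRoute scale unit); verify against current Azure documentation before using them for capacity planning.

```python
import math

def vpn_scale_units(required_gbps: float) -> int:
    """VPN scale units needed, assuming 500 Mbps aggregate per unit."""
    return max(1, math.ceil(required_gbps / 0.5))

def er_scale_units(circuit_gbps: float) -> int:
    """ExpressRoute scale units needed, assuming 2 Gbps per unit."""
    return max(1, math.ceil(circuit_gbps / 2.0))

print(vpn_scale_units(1.0))  # 1 Gbps aggregate branch traffic -> 2 units
print(er_scale_units(1.0))   # 1 Gbps circuit -> 1 unit (2 Gbps headroom)
```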
DNS Architecture
DNS is the most underestimated component of both architectures. Hybrid DNS resolution — where Azure resources need to resolve on-premises DNS names and vice versa — requires deliberate design in either topology.
Azure DNS Private Resolver
Azure DNS Private Resolver replaces the legacy pattern of deploying Windows DNS servers or BIND forwarders inside the hub VNet. It provides Azure-native inbound and outbound DNS endpoints:
- Inbound endpoint: On-premises DNS servers forward queries for Azure Private DNS zones to this endpoint. This enables on-premises resources to resolve Azure Private Endpoint hostnames (e.g., storage-account.blob.core.windows.net resolving to a private IP).
- Outbound endpoint: Azure resources use this endpoint to forward DNS queries to on-premises DNS servers for on-premises domain names. Configured via DNS Forwarding Rulesets.
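The routing decision a DNS Forwarding Ruleset expresses boils down to suffix matching: queries whose names fall under a configured domain are forwarded to the listed targets, and everything else resolves in Azure. The sketch below illustrates that decision only; the domain name and target IPs are invented examples, and the real resolver matches the longest configured suffix.

```python
# Assumed example ruleset: queries under corp.contoso.com go on-premises.
RULESET = {
    "corp.contoso.com.": ["10.50.0.10", "10.50.0.11"],  # on-prem DNS servers
}

def resolve_target(query_name: str) -> str:
    """Return where a query is sent: an on-prem forwarder list, or Azure."""
    name = query_name.rstrip(".") + "."
    matches = [domain for domain in RULESET if name.endswith(domain)]
    if matches:
        best = max(matches, key=len)  # longest suffix wins
        return f"forward to {RULESET[best]}"
    return "resolve in Azure (Private DNS zones / public DNS)"

print(resolve_target("fileserver.corp.contoso.com"))
print(resolve_target("myaccount.blob.core.windows.net"))
```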
DNS in Hub-and-Spoke
The DNS Private Resolver is deployed in a dedicated subnet within the hub VNet. Spoke VNets are configured to use the hub VNet's DNS settings (via VNet DNS server configuration pointing to the Private Resolver inbound IP). Conditional forwarding rules are managed centrally in the DNS Forwarding Ruleset.
DNS in Virtual WAN
Virtual WAN does not directly host Azure DNS Private Resolver inside the managed hub. The recommended pattern is to deploy a shared-services spoke VNet connected to the vWAN hub that hosts the DNS Private Resolver. Spoke VNets point their DNS configuration to the resolver's IP in the shared-services spoke. This adds one layer of indirection but functions equivalently.
Private DNS Zones (for Private Endpoints) should be linked to the hub VNet in hub-and-spoke, or to the shared-services spoke VNet in Virtual WAN. Centralized Private DNS Zone management is critical — distributed zone management across spoke VNets creates resolution failures that are extremely difficult to diagnose.
Firewall Policy Design
Azure Firewall, deployed in either architecture, uses Firewall Policy for rule management. Policy design affects manageability, performance, and compliance posture.
Rule Collection Group Hierarchy
Firewall Policy organizes rules into a three-level hierarchy. Understanding this hierarchy is essential for policy design that scales:
- Rule Collection Groups: The highest organizational unit. Groups are processed in priority order (lower number = higher priority). Use these to separate concerns: "Platform-Baseline" (priority 100), "Application-Teams" (priority 200), "Exception-Rules" (priority 300).
- Rule Collections: Within each group, rule collections define a set of rules with a shared action type. A rule collection is either Network, Application, or DNAT — it cannot mix types.
- Rules: Individual allow or deny rules within a collection.
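The hierarchy above can be sketched as priority-ordered, first-match-wins evaluation. This is a deliberately simplified model: real Azure Firewall also processes rule types in a fixed order (DNAT, then Network, then Application), which is omitted here, and the group names echo the examples given above.

```python
# Simplified policy model: groups -> collections -> rules,
# lower priority number evaluated first, first match wins.
POLICY = [
    {"group": "Platform-Baseline", "priority": 100, "collections": [
        {"priority": 100, "action": "Deny",
         "rules": [{"name": "block-smb-internet", "match": "tcp/445->internet"}]},
    ]},
    {"group": "Application-Teams", "priority": 200, "collections": [
        {"priority": 100, "action": "Allow",
         "rules": [{"name": "app1-https-out", "match": "tcp/443->internet"}]},
    ]},
]

def evaluate(flow: str) -> str:
    for group in sorted(POLICY, key=lambda g: g["priority"]):
        for coll in sorted(group["collections"], key=lambda c: c["priority"]):
            for rule in coll["rules"]:
                if rule["match"] == flow:
                    return f'{coll["action"]} ({group["group"]}/{rule["name"]})'
    return "Deny (implicit)"  # unmatched traffic is denied by default

print(evaluate("tcp/445->internet"))
print(evaluate("tcp/443->internet"))
print(evaluate("udp/53->internet"))
```

The point of the layered priorities is visible here: a platform baseline deny can never be overridden by an application team's allow, because the baseline group evaluates first.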
Application Rules vs. Network Rules vs. DNAT Rules
Design principle: Use Application Rules for all HTTP/HTTPS traffic. Application rules provide FQDN-based filtering, which is more maintainable than IP-based rules for internet-bound traffic. Use Network Rules only for non-HTTP protocols where FQDN filtering is not available. This approach gives maximum visibility and the most granular control.
Security Considerations: Azure Firewall vs. Third-Party NVAs
A common question in landing zone design is whether to use Azure Firewall or a third-party Network Virtual Appliance (e.g., Palo Alto, Fortinet, Check Point). The decision criteria:
Our recommendation: Choose Azure Firewall Premium for greenfield Azure deployments unless the organization has deep existing investment in a third-party firewall platform and the team is already proficient in that vendor's policy model. The operational overhead reduction of a PaaS firewall is substantial over a 3-year period.
Cross-Region Connectivity
Multi-region architecture is where the two topologies diverge most sharply.
Hub-and-Spoke: Multi-Region
Extending hub-and-spoke to multiple regions requires deploying a full hub VNet in each region and establishing Global VNet Peering between the hub VNets. Each hub operates independently with its own firewall, gateway, and DNS resolver. Cross-region traffic flows through two firewalls (source hub and destination hub), which doubles the inspection cost and adds latency.
Challenges with multi-region hub-and-spoke:
- Global VNet Peering is billed per GB in each direction for cross-region traffic, at higher rates than intra-region peering
- Route tables must be manually synchronized across hubs
- Firewall policies can be shared via Firewall Manager, but rule deployments must be coordinated
- DNS forwarding rules must be replicated to each regional resolver
- Failure domain is per-region — a hub outage isolates all spokes in that region
Virtual WAN: Multi-Region
Virtual WAN handles multi-region connectivity natively. You deploy a vWAN hub in each region, and hub-to-hub routing is automatic. Spokes in Region A can communicate with spokes in Region B through the managed hub-to-hub backbone without any manual route configuration.
Advantages of Virtual WAN for multi-region:
- Hub-to-hub routing is automatic and transitive
- Microsoft's backbone network carries cross-region traffic (lower latency than internet-routed Global VNet Peering in many cases)
- Routing Intent applies globally — all traffic inspection policies propagate across hubs
- Branch-to-branch connectivity across regions is handled natively
- Operational complexity does not scale linearly with region count
Our recommendation: For single-region or two-region deployments, hub-and-spoke provides sufficient capability at lower cost. When an organization operates in three or more regions, or has a near-term roadmap to expand beyond two regions, Virtual WAN's automated cross-region routing becomes the decisive advantage. The operational cost of managing multi-region hub-and-spoke with manual route synchronization exceeds the platform cost premium of Virtual WAN.
Detailed Cost Comparison
Cost is often cited as the deciding factor, but simplistic comparisons are misleading. The true cost depends on the specific components deployed, the scale of the environment, and the number of regions. The following table provides a detailed line-item comparison.
Single-Region Cost Comparison
Two-Region Cost Comparison
Three-Region Cost Comparison
At three regions, Virtual WAN becomes cost-equivalent on infrastructure and substantially cheaper when engineering time is factored in. The crossover point varies by organization, but for most enterprises, it falls between two and three active regions.
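The crossover logic can be expressed as a simple parametric model. Every dollar figure below is an assumed placeholder, not Azure pricing; substitute your own quotes and engineering-cost estimates. The key structural insight is that hub-and-spoke route synchronization cost grows with hub pairs (roughly quadratically in region count), while the Virtual WAN premium grows linearly.

```python
# ASSUMED placeholder figures -- not Azure pricing. Replace with your own.
VWAN_PREMIUM_PER_REGION = 175    # assumed monthly vWAN platform premium per hub
ROUTE_SYNC_COST_PER_PAIR = 300   # assumed monthly engineering cost per hub pair

def monthly_premiums(regions: int):
    """Return (hub-and-spoke overhead, vWAN premium) for a region count."""
    vwan = VWAN_PREMIUM_PER_REGION * regions
    pairs = regions * (regions - 1) // 2   # hub pairs needing route sync
    hub_spoke = ROUTE_SYNC_COST_PER_PAIR * pairs
    return hub_spoke, vwan

for n in range(1, 5):
    hs, vw = monthly_premiums(n)
    cheaper = "hub-and-spoke" if hs < vw else "Virtual WAN"
    print(f"{n} region(s): hub-spoke overhead ${hs}, vWAN premium ${vw} -> {cheaper}")
```

With these illustrative inputs, the model reproduces the pattern described above: hub-and-spoke wins at one or two regions, and Virtual WAN wins from three onward.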
Migration Path: Hub-and-Spoke to Virtual WAN
Organizations that start with hub-and-spoke may eventually need to migrate to Virtual WAN as they scale. This migration is possible but requires careful planning. The recommended approach:
- Deploy the Virtual WAN hub in parallel with the existing hub VNet. Both can coexist temporarily.
- Migrate spoke VNets one at a time. Disconnect a spoke from the hub VNet peering, then connect it to the vWAN hub. Test connectivity and DNS resolution before proceeding to the next spoke.
- Migrate gateway connectivity. Move VPN or ExpressRoute connections to the vWAN hub gateways. This typically requires a maintenance window for the cutover.
- Deploy Azure Firewall in the vWAN hub (Secured Virtual Hub) and enable Routing Intent. Migrate firewall policies from the hub VNet firewall to the vWAN firewall using Firewall Manager.
- Decommission the hub VNet once all spokes and gateways are migrated and validated.
Expected timeline: 4-8 weeks for a 10-20 spoke environment, depending on testing requirements and maintenance window availability. The critical risk is DNS — ensure Private DNS Zone links are updated before decommissioning the hub VNet.
Monitoring and Troubleshooting
The two architectures differ significantly in observability and troubleshooting approaches.
Hub-and-Spoke Monitoring
- Network Watcher: Full access to NSG flow logs, connection troubleshoot, packet capture, and next hop analysis
- Azure Firewall logs: Diagnostic settings send logs to Log Analytics — full visibility into every allowed and denied flow
- VNet peering status: Monitor peering state and data transfer metrics via Azure Monitor
- UDR verification: Effective routes on each NIC show the actual routing table applied — essential for diagnosing routing issues
Virtual WAN Monitoring
- vWAN Insights: Azure Monitor workbook provides topology visualization and hub health metrics
- BGP dashboard: View BGP route advertisements and learned routes across all hub connections
- Routing effectiveness: Effective routes show vWAN-managed routes per connection — less granular than per-NIC effective routes in hub-and-spoke
- Limitation: Packet capture and detailed next-hop analysis are not available inside the managed hub infrastructure. Troubleshooting routing issues in vWAN often requires Microsoft support engagement for hub-internal routing problems.
Practical implication: Hub-and-spoke provides deeper troubleshooting capability because you control the hub infrastructure. Virtual WAN provides better out-of-the-box monitoring dashboards but limits your ability to investigate issues inside the managed hub. For organizations with strong networking teams accustomed to deep packet-level troubleshooting, this reduced visibility in Virtual WAN can be frustrating.
Real-World Scale Limits
Both architectures have documented limits that should inform your decision:
The VPN tunnel limit is the most common scale constraint that pushes organizations toward Virtual WAN. Enterprises with 50+ branch offices requiring dedicated tunnels will exhaust hub-and-spoke VPN Gateway limits, while a Virtual WAN hub supports up to 1,000 branch VPN connections.
Key Takeaways
- Hub-and-spoke is the right default for most enterprises. It covers 80% of requirements at lower cost with higher control. Choose it unless you have a specific, quantified reason to need Virtual WAN — typically three or more regions or 50+ VPN tunnels.
- The cost difference is modest in single-region deployments. Approximately $150-200/month. Do not let cost alone drive the decision — the operational model difference is far more significant.
- DNS architecture deserves dedicated design effort. In either topology, DNS misconfiguration is the number one cause of post-deployment connectivity issues. Plan inbound endpoints, outbound forwarding, and Private DNS Zone linking before deploying the first workload.
- Firewall policy design determines long-term manageability. Invest in a rule collection group hierarchy that separates platform baseline rules from application team rules. This structure scales. Flat rule lists do not.
- Plan for the transition path. If you start with hub-and-spoke, understand the migration path to Virtual WAN. Factor your 3-year growth trajectory — number of regions, number of spokes, number of branch connections — into the initial decision.
- Cross-region routing is the decisive differentiator. If your roadmap includes three or more active regions within 18 months, start with Virtual WAN. The operational cost of multi-region hub-and-spoke route synchronization exceeds the platform premium.
- Monitoring depth varies. Hub-and-spoke offers deeper troubleshooting access. Virtual WAN offers better topology dashboards. Choose based on your team's operational maturity and troubleshooting expectations.
Next Steps
Selecting the right networking topology is the first — and most consequential — decision in an Azure landing zone deployment. The decision affects cost, security posture, operational burden, and scalability for every workload deployed afterward. It is worth investing the time to model your specific requirements against both architectures before committing.
Our landing zone architecture reviews typically evaluate networking topology alongside identity, governance, security, and management design areas. The networking decision does not exist in isolation — it interacts with DNS strategy, identity integration, and security inspection requirements.
Contact Techrupt to schedule a landing zone architecture review for your organization.




