US (IAD) load balancing We're now using hardware-based server load balancers in the US. Posted on Jun 22, 2020 12:00 -0400
Since day one we’ve been using DNS to load balance customer connections to our farm. This has worked well, and we’ve been happy with it over the years. But as we’ve grown we’ve noted one downside: excessive IP address consumption. The issue is compounded as we add resellers, since each front-end requires an IP per reseller. When you’re growing as fast as we are, it adds up quickly. So…
Today we started moving over to a redundant hardware solution using DSR (direct server return). In addition to using far fewer IPv4 addresses, we get a few more benefits from the new setup. We’ll be able to perform maintenance on front-end servers more easily (without having to wait for DNS to update), and we expect load to be balanced more uniformly across the front-ends. We don’t expect the change to be noticeable to end users, but please report any issues you experience.
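For the protocol-curious, the core trick of DSR is that the balancer only rewrites the layer-2 destination of incoming packets; replies never pass back through it. Here’s a toy sketch of that step in Python with scapy, using made-up addresses (a real deployment does this in the kernel, e.g. Linux IPVS in DR mode, not in userspace):

```python
# Toy illustration of the DSR forwarding step; all addresses are made up.
from scapy.all import Ether, IP, TCP

VIP = "203.0.113.10"              # virtual IP customers connect to (example)
REAL_MAC = "02:00:00:aa:bb:01"    # MAC of the chosen front-end (example)

# Packet as it arrives at the balancer, addressed to the VIP.
pkt = Ether(dst="02:00:00:00:00:01") / IP(src="198.51.100.7", dst=VIP) / TCP(dport=119)

# DSR rewrites only the Ethernet destination; the IP header is untouched.
# The front-end also holds the VIP on a loopback (with ARP suppressed),
# so it accepts the packet and replies directly to the client, skipping
# the balancer entirely on the return path.
pkt[Ether].dst = REAL_MAC
print(pkt.summary())
```

Because the balancer only touches inbound traffic, it needs a single VIP per reseller instead of an IP per front-end per reseller, which is where the IPv4 savings come from.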
Cogent (one of our bandwidth providers) will be performing network maintenance during the following window:
Start time: 12:00 am eastern 03/27/2020
End time: 3:00 am eastern 03/27/2020
Expected Outage/Downtime: 60-90 minutes
During this maintenance window you may experience one or more brief interruptions in service while Cogent completes its maintenance activities; the interruptions are expected to total no more than 60-90 minutes.
Most of our traffic will fail over to one of our redundant connections, but with the high usage we’re currently seeing from COVID-19, we expect traffic to be slowed down a bit.
Leap Year Sale The sale that only comes every four years Posted on Feb 28, 2020 12:00 -0400
We have two amazing sales starting tomorrow (2/29/2020)!
To celebrate leap year we’re offering four years of unlimited access for only $95. What a deal!
EU Soft Launch Posted on Dec 12, 2019 12:00 -0400
Hello everyone, sorry about the delay. We ran into a few network issues that took longer than expected to diagnose and confirm fixed. We’re now to the point where we’d like to have some organic load on our new EU location. Anyone feeling brave, please give news-eu.usenetexpress.com a shot. We’ve done quite a bit of testing internally, but nothing beats a bunch of you banging on the servers.
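If you’d like a quick sanity check before pointing your newsreader at it, a few lines of Python will do. This uses the stdlib nntplib module (available through Python 3.12) and assumes TLS on the standard NNTPS port 563; the credentials are placeholders:

```python
# Quick connectivity check against the new EU front-ends.
import nntplib

server = nntplib.NNTP_SSL("news-eu.usenetexpress.com", 563)
print(server.getwelcome())                # greeting banner from the EU servers
# server.login("username", "password")    # your normal account credentials
server.quit()
```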
Europe update We finally have progress!! Posted on Nov 27, 2019 12:00 -0400
After what seems like forever (at least years), we’ve finally made significant progress on our European location. Over the last week we finished moving our transit/peering servers into their new location. Our upstream network capacity has also been more than doubled. In addition, we now have direct access to multiple internet exchanges (IX), providing low latency connections to a huge portion of the ‘net.
We’re currently finishing up our authentication and front-end (customer facing) infrastructure. The servers are on-site and racked; we’re just finishing software-related tasks, which we expect to wrap up this week. We already have a small spool set located in Europe, roughly 10ms away from our new location. To start, we’ll use this spool set for “local” retention. Anything not found there will be retrieved from our US location.
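Conceptually, retrieval will look something like the sketch below; the function names and stubs are hypothetical stand-ins, not our actual internals:

```python
# Rough sketch of the two-tier article lookup; names and stubs are hypothetical.
def eu_spool_get(message_id: str):
    """Check the nearby EU spool set (~10ms away)."""
    return None  # stub: pretend this article is outside local retention

def us_spool_get(message_id: str):
    """Fall back to the deeper US spools (~75ms round trip)."""
    return b"article body"  # stub

def fetch_article(message_id: str):
    # Serve from local EU retention when we can; anything older is
    # fetched transparently from the US location.
    return eu_spool_get(message_id) or us_spool_get(message_id)

print(fetch_article("<example@message.id>"))
```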
Next on the list, and already ordered, are spools to place in the same data center as our transit/front-ends/etc. We’re expecting these to be delivered the 2nd week of December, depending on how long customs takes, etc. We found it less expensive to purchase HDD in the US, ship to the EU, and pay VAT than to purchase from EU vendors. Most likely that’s due to the relationships we have with the US vendors we’ve purchased a metric crap ton of HDD through.
We don’t have firm plans on how deep we’ll take retention on the EU spool set. It’s going to be a balancing act: we want it deep enough to serve the majority of requests, but for now we plan to allocate resources to deeper retention rather than duplicate retention. Basically, it doesn’t make sense to pay for twice the storage to serve a few percent of requests when pulling an older article from our US location is reasonably quick (75ms). We’ve found that most latency issues can be overcome with more connections.

We also have some interesting backend development going on using QUIC as a transport protocol between locations (rough sketch below). Our initial testing has shown significant speed gains with QUIC, which could reduce the penalty of pulling from geographically diverse locations even further.

As we develop and implement new technology, the “formula” for how deep we keep retention at each location will change. Our number one goal is to deliver a pleasant user experience that is well priced in the market.
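For the curious, a cross-location fetch over QUIC could look roughly like this, using the open-source aioquic library. The host, port, ALPN label, and wire format here are made-up placeholders, not our actual backend protocol:

```python
# Sketch of fetching an article over a QUIC stream with aioquic.
# Everything below (host, port, ALPN, framing) is a placeholder.
import asyncio
import ssl

from aioquic.asyncio import connect
from aioquic.quic.configuration import QuicConfiguration

async def fetch(message_id: bytes) -> bytes:
    config = QuicConfiguration(is_client=True, alpn_protocols=["article-xfer"])
    config.verify_mode = ssl.CERT_NONE  # lab testing only; verify certs in production
    async with connect("us-backend.example.net", 4433, configuration=config) as conn:
        reader, writer = await conn.create_stream()  # one stream per request
        writer.write(message_id + b"\r\n")
        writer.write_eof()
        return await reader.read()                   # read article to end-of-stream

print(asyncio.run(fetch(b"<example@message.id>")))
```

The appeal is that many article requests can ride one long-lived connection as independent streams, so a single slow transfer doesn’t head-of-line block the others the way it would on a shared TCP connection, and there’s no per-request handshake across the 75ms path.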