EU Soft Launch
Posted on Dec 12, 2019 12:00 -0400
Hello everyone, sorry about the delay. We ran into a few network issues that took longer than expected to diagnose and confirm fixed. We're at the point where we'd like to have some organic load on our new EU location. Anyone feeling brave, please give news-eu.usenetexpress.com a shot. We've done quite a bit of testing internally, but nothing beats a bunch of you banging on the servers.
Europe update: We finally have progress!
Posted on Nov 27, 2019 12:00 -0400
After what seems like forever (at least years), we've finally made significant progress on our European location. Over the last week we finished moving our transit/peering servers into their new location. Our upstream network capacity has also been more than doubled. In addition, we now have direct access to multiple internet exchanges (IXs), providing low-latency connections to a huge portion of the 'net.
We're currently finishing up our authentication and front-end (customer-facing) infrastructure. The servers are on-site and racked; we're just doing software-related tasks. We expect that to be finished this week. We already have a small spool set located in Europe, roughly 10ms away from our new location. To start, we'll use this spool set for "local" retention. Anything not found there will be retrieved from our US location.
Next on the list, and already ordered, are spools to place in the same data center as our transit/front-ends/etc. We're expecting these to be delivered the 2nd week of December, depending on how long customs takes. We found it less expensive to purchase HDDs in the US, ship them to the EU, and pay VAT than to purchase from EU vendors. Most likely this is due to the relationships we have with US vendors, through which we've purchased a metric crap ton of HDDs.
We don't have firm plans on how deep we'll take the retention on the spool set in the EU. It's going to be a balancing act. We want it deep enough to serve the majority of requests, but for now we plan to allocate our resources to deeper retention instead of duplicate retention. Basically, it doesn't make sense to double our storage cost to serve a few percent of requests when pulling an older article from our US location is reasonably quick (~75ms). We've found that most latency issues can be overcome with more connections.

We also have some interesting backend development going on using QUIC as a transport protocol between locations. Our initial testing has shown significant speed gains when using QUIC, which could reduce the cost of pulling from geographically diverse locations even further. As we develop and implement new technology, the "formula" for how deep we keep retention at each location will change. Our number one goal is a pleasant user experience that is well priced in the market.
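As a back-of-the-envelope illustration of the "more connections" point: a TCP connection can have at most one receive window in flight per round trip, so its throughput is capped at roughly window/RTT, and the aggregate scales with the connection count. A minimal sketch (the 256 KiB window is an assumed value for illustration, not our actual configuration):

```python
def max_throughput_mbps(window_bytes, rtt_ms, connections=1):
    """Rough upper bound on aggregate TCP throughput in megabits/sec.

    Each connection can have at most one receive window of data in
    flight per round trip, so per-connection throughput <= window/RTT.
    """
    per_conn_bytes_sec = window_bytes / (rtt_ms / 1000.0)
    return per_conn_bytes_sec * connections * 8 / 1e6

# 75ms RTT to the US spool, 256 KiB window (assumed values):
single = max_throughput_mbps(256 * 1024, 75)        # ~28 Mbps
twenty = max_throughput_mbps(256 * 1024, 75, 20)    # ~559 Mbps
```

With those assumptions, a single connection tops out around 28 Mbps, while twenty connections can in principle saturate a half-gigabit link despite the 75ms round trip, which is why extra connections mask most of the transatlantic latency.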
Amazing block special: $5 for 500G block, buy 3 get 1 free
Posted on Oct 13, 2019 12:00 -0400
Hello everyone. It has been a while since we ran a block special, and we've had numerous emails asking when the next one will be.
Over the next few days we'll run a 500G block for $5. As a special bonus, if you purchase three blocks we'll give you a fourth one free. Unfortunately, our billing software isn't flexible enough to apply the fourth block automatically, so a quick email to [email protected] is required to claim your free block.
Core router reboot: Ack, core router update requires reboot
Posted on Jun 28, 2019 12:00 -0400
Hello everyone. We'll be in the datacenter over the next few days doing a few upgrades. At the top of the list is a firmware (Junos) update to our core routers. Unfortunately, the update requires a reboot, which will cause a service interruption. We'll take advantage of the reboot window to also double the capacity (additional network connections) from our core routers to core switching. Our peering servers in Europe will backlog articles while the US routers are rebooting. We don't expect any lost articles, but all customer connections will be reset and unavailable while the routers reboot and BGP routing converges (~10min). We expect the router update to take place on Saturday (June 29th) in the afternoon (our slow period).
We’ll update once the reboot is complete.
Upgrades complete. Total downtime ended up being closer to 15min due to some unforeseen snags.