NGINX Conf 2018, Day 2: How NGINX Is Making Huge Strides Against the Backdrop of Rapid Digital Transformation

Original: https://www.nginx.com/blog/nginx-conf-2018-day-2/

While Day 1 of NGINX Conf 2018 was more about the big picture, on Day 2 we drilled down with several talks that positioned NGINX products against the backdrop of today’s rapidly changing technology landscape. We heard fascinating keynotes from NGINX technologists, Margaret Dawson of Red Hat, and developer evangelist Steven Cooper that sent people away buzzing.

For anyone who didn’t have a chance to either attend or tune in, I’d like to share my six takeaways from Day 2.

Takeaway #1: We’ve Come a Long Way Toward Making NGINX the Engine of the Web

We recognize that when companies choose to implement a technology in their production environments, they’re taking a gamble with a significant investment of time and money. So there was a certain amount of pride in watching Owen Garrett, Sr. Director of Product Management at NGINX, run through customer research that showed how choosing NGINX technology has paid off for so many customers.

One thing Owen pointed out is that many in the NGINX community have been using NGINX Open Source for as long as 14 years (and NGINX Plus for the 5 years since its launch in 2013). It’s been amazing to have that sort of long‑term relationship with the community and our customers. Igor Sysoev named his open source project NGINX because he hoped it would become the engine of the web. And we can see now that NGINX Open Source is chosen by the busiest, biggest, and most business‑critical websites in the world: 65% of the top 10,000 sites, 58% of the top 100,000, and more of the top million sites than any other server. We’re deployed on about 300 million sites and 2 million public IP addresses.

Not only has usage scaled, but we’ve been able to keep our customers happy, with award‑winning customer service and a Net Promoter Score (NPS, a measure of customer satisfaction) of 62, well above those of Amazon, Google, and Microsoft.

Takeaway #2: Enhancements for NGINX Plus in Clusters Can Help Defeat Bad Bots

Liam Crilly, Director of Product Management at NGINX, outlined recent enhancements to NGINX Plus focused on helping customers with clustered deployments. The rapid adoption of containerized environments and orchestration platforms, as well as cloud technology, has driven a shift from active‑passive deployments to scaled‑out clusters of multiple NGINX Plus instances.

Liam described how NGINX Plus R16’s new state‑sharing capabilities cover an expanded set of use cases, and in particular can thwart “bad actor” bots. Such bots – which now account for a shocking 21% of Internet traffic – can hammer a website with large numbers of requests, consuming resources and bandwidth to such a degree that the experience of legitimate human visitors to the site is degraded. Rejecting bot‑generated requests outright is not the best solution, however, because in response the bot can simply switch to another IP address and attack you (or another site) from there. Instead, you can impose a very low bandwidth limit on responses to the client – say, 10 Kbps – turning the ‘slow HTTP attack’ technique on its head and effectively neutralizing the bad actor.

Liam illustrated this most compellingly in a live demo. A cluster of NGINX Plus instances, distributed across a data center and several clouds, shared information about the bad actors in real time. He first set a cluster‑wide limit on the rate of requests from each client IP address. When requests from a client exceeded the limit, NGINX Plus created an entry for its IP address in a cluster‑wide key‑value store called a “sin bin”. Responses to clients in the sin bin were then bandwidth limited. Because the sin bin is shared across all of the NGINX Plus instances, bad actors can’t avoid the bandwidth limit by switching their attention to an instance in a different data center or cloud.
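The pattern from the demo can be sketched in NGINX Plus configuration. This is an illustrative fragment, not the demo’s actual config: the zone names, rates, and backend name are assumptions, and it presumes NGINX Plus R16 or later with zone synchronization (the `zone_sync` module) already configured between cluster members.

```nginx
# Cluster-synchronized key-value store of bad-actor IPs (the "sin bin");
# entries expire automatically after 10 minutes.
keyval_zone zone=sinbin:1m timeout=600 sync;
keyval $remote_addr $in_sinbin zone=sinbin;

# Cluster-wide request-rate limit, keyed on client IP address.
limit_req_zone $remote_addr zone=per_ip:1m rate=100r/s sync;

server {
    listen 80;

    location / {
        # Clients already in the sin bin get ~10 Kbps responses
        # (1250 bytes per second) instead of being rejected outright.
        if ($in_sinbin) {
            set $limit_rate 1250;
        }

        limit_req zone=per_ip;
        # When a client exceeds the rate limit, an entry for its IP
        # address can be added to the sin bin -- for example via the
        # NGINX Plus API's /http/keyvals/sinbin endpoint or an njs
        # handler, as in the demo.
        proxy_pass http://backend;   # hypothetical upstream group
    }
}
```

Because both zones are declared with the `sync` parameter, every NGINX Plus instance in the cluster shares the same rate counters and the same sin‑bin entries.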

Takeaway #3: Customer Input Is Driving the Direction of NGINX Controller

Jason Feldt, Director of Product Management at NGINX, expanded on the idea that modern applications need to be like living organisms that can adapt efficiently to changes in their environment. For applications delivered with the NGINX Application Platform, NGINX Controller is the eyes and ears that gather information about your applications and infrastructure, and the brain that helps you implement improvements to make your application delivery infrastructure more agile, resilient, and efficient.

Jason reviewed the results from our recent survey, which confirmed that our customers are at different points in their modernization journey and so need a diverse range of capabilities from NGINX Controller. Some want to be able to make configuration changes once and push them out everywhere, some need point-and-click configuration and management tools so less tech‑savvy colleagues can start using NGINX Plus, and others are most interested in help monitoring and analyzing their app delivery environment.

Jason reported that NGINX Controller is well on its way to satisfying all of these asks. In a live demo, he explored NGINX Controller’s sophisticated monitoring, alerting, and configuration analysis tools. He showed how, with the point-and-click set‑up wizard, in just five steps you can create a new NGINX Plus load balancer configuration with defaults that implement NGINX’s best‑practice recommendations. You can then customize the configuration if you wish, and push it to a configurable set of NGINX Plus hosts with one click.

Takeaway #4: NGINX Unit Is Expanding NGINX Technology into the Application Space

Nick Shadrin, Sr. Product Manager at NGINX, highlighted how NGINX technology now covers all three basic functional spaces at a website. NGINX Open Source and NGINX Plus are industry‑leading solutions in the web‑serving and traffic‑routing spaces. NGINX Unit continues to expand its presence in the application space, thanks to community members like Timo Stark of Audi who have used NGINX Unit since it was introduced.

Timo joined Nick onstage to describe how NGINX Unit has simplified configuration and management of his PHP and Go applications in production. NGINX Unit is so lightweight that the containers for his applications are only one‑third as large as before. Timo is particularly looking forward to taking advantage of a feature just released in NGINX Unit 1.4, dynamic update of TLS certificates with no configuration reload or process restart required. Up to now updating certificates for his Go applications has required an application restart with the inherent risk of service disruption.
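The workflow Timo described can be sketched against NGINX Unit’s control API, which stores certificate bundles separately from the configuration so they can be swapped without a restart. The bundle name, file paths, and application name below are illustrative; the control‑socket path varies by installation.

```nginx
# 1. Upload a PEM bundle (certificate chain plus private key) into
#    Unit's certificate storage under the name "bundle":
#
#    curl -X PUT --data-binary @cert_and_key.pem \
#         --unix-socket /var/run/control.unit.sock \
#         'http://localhost/certificates/bundle'
#
# 2. Reference the stored bundle from a listener; the change is
#    applied immediately, with no reload or process restart:
#
#    curl -X PUT -d '{
#            "pass": "applications/my_go_app",
#            "tls": { "certificate": "bundle" }
#        }' \
#         --unix-socket /var/run/control.unit.sock \
#         'http://localhost/config/listeners/*:443'
```

Replacing the certificate later is just another PUT to the same `/certificates/<name>` endpoint, which is what removes the application‑restart risk Timo mentioned.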

Nick was also joined by NGINX co‑founder and CTO Igor Sysoev, who is leading development of both NGINX Unit and njs (the NGINX JavaScript module). Igor pointed out that the two technologies have complementary roles in the application space: njs is good for small‑scale customizations of NGINX traffic handling (such as authentication), while more complex functionality belongs in a separate application served by NGINX Unit.

Takeaway #5: Digital Leaders Are Willing To Embrace Change, Quickly

Margaret Dawson, VP of Products and Technologies at Red Hat, examined characteristics shared by all leaders in digital transformation. One important one is a willingness to embrace change at a fast pace, in every arena of the organization: technology, process, and culture. Leaders also have a specific vision of the future consistent with realistic business outcomes, and have convinced others throughout the organization to share that vision.

These qualities are actually quite rare, but when you have them you can achieve astounding things. Margaret quoted results from a study that found high‑performing groups deploy code changes 46 times more frequently than low‑performing ones.

Digital leaders also use data to adapt, deciding on the true best course of action by taking input from a range of sources. They also leverage open source technology – like NGINX does – and so benefit from the inputs and innovations of a community of developers. Perhaps the most difficult change to make in becoming a digital leader is to your organizational culture. You need an “open source” work culture that encourages collaboration and transparency, and replaces hierarchical structure with an emphasis on accountability.

Takeaway #6: In the Internet of Things, No One Knows You Are a Fridge

Steven Cooper, a developer evangelist, inspiringly laid out the endless possibilities for digital innovation that the Internet of Things has opened up, as people look to build a seamless digital experience into the world around them.

There are currently 23 billion sensors in the world, according to data Steven referenced, but there’s still so much more to connect: just 7% of cars have a sensor in them, and only 14% of homes and 26% of cities employ some form of smart technology. The major challenge is connectivity, he said. We’ve seen a rise of extended closed networks and edge computing to meet these needs, but there’s a long way to go. Another constraint is that the devices themselves must be extremely power efficient, in some cases relying on a small, self‑contained power source for their entire lifetime. That makes efficient and scalable network stacks on the device and the server critically important.

Retrieved by Nick Shadrin from nginx.com website.