How to Use OpenPubkey with GitHub Actions Workloads

https://www.docker.com/blog/how-to-use-openpubkey-with-github-actions-workloads/ (Thu, 21 Dec 2023)

This post was contributed by Ethan Heilman, CTO at BastionZero.

OpenPubkey is the web’s new technology for adding public keys to standard single sign-on (SSO) interactions with identity providers that speak OpenID Connect (OIDC). OpenPubkey works by essentially turning an identity provider into a certificate authority (CA), which is a trusted entity that issues certificates that cryptographically bind an identity with a cryptographic public key. With OpenPubkey, any OIDC-speaking identity provider can bind public keys to identities today.

OpenPubkey is newly open-sourced through a collaboration of BastionZero, Docker, and the Linux Foundation. We’d love for you to try it out, contribute, and build your own use cases on it. You can check out the OpenPubkey repository on GitHub.

In this article, our goal is to show you how to use OpenPubkey to bind public keys to workload identities. We’ll concentrate on GitHub Actions workloads, because this is what is currently supported by the OpenPubkey open source project. We’ll also briefly cover how Docker is using OpenPubkey with GitHub Actions to sign in-toto attestations on Docker Official Images and improve supply chain security. 

What’s an ID token?

Before we start, let’s review the OpenID Connect protocol. Identity providers that speak OIDC are usually called OpenID Providers, but we will just call them OPs in this article. 

OIDC has an important artifact called an ID token. A user obtains an ID token after they complete their single sign-on to their OP. They can then present the ID token to a third-party service to prove that they have properly been authenticated by their OP.  

The ID token includes the user’s identity (such as their email address) and is cryptographically signed by the OP. The third-party service can validate the ID token by querying the OP’s JSON Web Key Set (JWKS) endpoint to obtain the OP’s public key, and then using that public key to verify the signature on the ID token.
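To make that flow concrete, here is a minimal Go sketch of validating an RS256-signed ID token against a key fetched from the OP’s JWKS endpoint. It uses only the standard library, and taking the first key in the set is a simplifying assumption; a real verifier matches the token’s kid header to the right key and also checks claims such as expiry and audience.

```go
package oidcsketch

import (
	"crypto"
	"crypto/rsa"
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"math/big"
	"net/http"
	"strings"
)

// jwks models the RSA fields of a JSON Web Key Set response.
type jwks struct {
	Keys []struct {
		N string `json:"n"` // modulus (base64url)
		E string `json:"e"` // exponent (base64url)
	} `json:"keys"`
}

// verifyIDToken checks the OP's RS256 signature on a compact-serialized ID token.
func verifyIDToken(idToken, jwksURL string) error {
	parts := strings.Split(idToken, ".")
	if len(parts) != 3 {
		return fmt.Errorf("malformed JWT")
	}

	// Fetch the OP's public key from its JWKS endpoint.
	resp, err := http.Get(jwksURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	var set jwks
	if err := json.NewDecoder(resp.Body).Decode(&set); err != nil {
		return err
	}
	if len(set.Keys) == 0 {
		return fmt.Errorf("empty JWKS")
	}
	n, err := base64.RawURLEncoding.DecodeString(set.Keys[0].N)
	if err != nil {
		return err
	}
	e, err := base64.RawURLEncoding.DecodeString(set.Keys[0].E)
	if err != nil {
		return err
	}
	pub := &rsa.PublicKey{
		N: new(big.Int).SetBytes(n),
		E: int(new(big.Int).SetBytes(e).Int64()),
	}

	// For RS256, the signature covers the bytes of "header.payload".
	digest := sha256.Sum256([]byte(parts[0] + "." + parts[1]))
	sig, err := base64.RawURLEncoding.DecodeString(parts[2])
	if err != nil {
		return err
	}
	return rsa.VerifyPKCS1v15(pub, crypto.SHA256, digest[:], sig)
}
```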

How do GitHub Actions obtain ID tokens? 

So far, we’ve been talking about human identities (such as email addresses) and how they are used with ID tokens. But, our focus in this article is on workload identities. It turns out that Actions has a nice way to assign ID tokens to GitHub Actions.   

Here’s how it works. GitHub runs an OpenID Provider. When a new GitHub Action is spun up, GitHub first assigns it a fresh API key and secret. The GitHub Action can then use its API key and secret to authenticate to GitHub’s OP. GitHub’s OP can validate this API key and secret (because it knows that it was assigned to the new GitHub Action) and then provide the GitHub Action with an OIDC ID token. This GitHub Action can now use this ID token to identify itself to third-party services.

When interacting with GitHub’s OP,  Docker uses the job_workflow_ref claim in the ID token as the workflow’s “identity.” This claim identifies the location of the file that the GitHub Action is built from, so it allows the verifier to identify the file that generated the workflow and thus also understand and check the validity of the workflow itself. Here’s an example of how the claim could be set:

job_workflow_ref = octo-org/octo-automation/.github/workflows/oidc.yml@refs/heads/main

Other claims in the ID tokens issued by GitHub’s OP can be useful in other use cases. For example, there are claims called actor and actor_id, which identify the person who kicked off the GitHub Action. These could be useful for checking that the workload was kicked off by a specific person. (They’re less useful when the workload was started by an automated process.)

GitHub’s OP supports many other useful fields in the ID token. You can learn more about them in the GitHub OIDC documentation.

Creating a PK token for workloads

Now that we’ve seen how to identify workloads using GitHub’s OP, we will see how to bind that workload identity to its public key with OpenPubkey. OpenPubKey does this with a cryptographic object called the PK token. 

To understand how this process works, let’s go back and look at how GitHub’s OP implements the OIDC protocol. The ID tokens generated by GitHub’s OP have a field called audience. Importantly, unlike the identity claims, the audience field is chosen by the OIDC client that requests the ID token. When GitHub’s OP creates the ID token, it includes the audience along with the other fields (like job_workflow_ref and actor) that the OP signs when it creates the ID token.

So, in OpenPubkey, the GitHub Action workload runs an OpenPubkey client that first generates a new public-private key pair. Then, when the workload authenticates to GitHub’s OP with OIDC, it sets the audience field equal to the cryptographic hash of the workload’s public key along with some random noise. 
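As an illustrative sketch of that commitment (the exact serialization and hash construction are defined by the OpenPubkey protocol, not by this snippet, and ed25519 is used here only for brevity), the client might compute the audience value like this:

```go
package oidcsketch

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
)

// newKeyCommitment generates a fresh key pair plus random noise and returns
// the audience value that commits the ID token to the public key.
func newKeyCommitment() (pub ed25519.PublicKey, priv ed25519.PrivateKey, noise []byte, aud string, err error) {
	pub, priv, err = ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return
	}
	noise = make([]byte, 32)
	if _, err = rand.Read(noise); err != nil {
		return
	}
	h := sha256.New()
	h.Write(pub)   // the workload's public key...
	h.Write(noise) // ...salted with random noise
	aud = base64.RawURLEncoding.EncodeToString(h.Sum(nil))
	return
}
```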

Now, the ID token contains the GitHub OP’s signature on the workload’s identity (the job_workflow_ref field and other relevant fields) and on the hash of the workload’s public key. This is most of what we need to have GitHub’s OP bind the workload’s identity and public key.

In fact, the PK token is a JSON Web Signature (JWS), which roughly consists of:

  • The ID token, including the audience field, which contains a hash of the workload’s public key.
  • The workload’s public key.
  • The random noise used to compute the hash of the workload’s public key.
  • A signature, under the workload’s public key, of all the information in the PK token. (This signature acts as a cryptographic proof that the user has access to the user-held secret signing key that is certified in the PK token.)

The PK token can then be presented to any OpenPubkey verifier, which uses OIDC to obtain the GitHub OP’s public key from its JWKS endpoint. The verifier then verifies the ID token using the GitHub OP’s public key and verifies the remaining fields in the PK token using the workload’s public key. Now the verifier knows the public key of the workload (as identified by its job_workflow_ref or other fields in the ID token) and can use this public key for whatever cryptography it wants to do.
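Putting the pieces together, the verifier’s checks look roughly like the following sketch, which reuses verifyIDToken from the earlier snippet. Extracting the claims and signed payload from the JWS, and GQ-signature handling (discussed later), are left out; the real verifier lives in the OpenPubkey repo.

```go
package oidcsketch

import (
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// verifyPKToken sketches the verifier's three checks on a PK token.
func verifyPKToken(idToken, aud string, workloadPub ed25519.PublicKey,
	noise, signedPayload, workloadSig []byte, jwksURL string) error {

	// 1. The OP really signed the ID token.
	if err := verifyIDToken(idToken, jwksURL); err != nil {
		return fmt.Errorf("bad OP signature: %w", err)
	}

	// 2. The audience claim commits to the workload's public key.
	h := sha256.New()
	h.Write(workloadPub)
	h.Write(noise)
	if base64.RawURLEncoding.EncodeToString(h.Sum(nil)) != aud {
		return fmt.Errorf("audience does not commit to this public key")
	}

	// 3. The workload's signature over the PK token contents is valid,
	//    proving possession of the corresponding private key.
	if !ed25519.Verify(workloadPub, signedPayload, workloadSig) {
		return fmt.Errorf("bad workload signature")
	}
	return nil
}
```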

Can you use ephemeral keys with OpenPubkey?

Yes! An ephemeral key is a key that is only used for a short period of time. Ephemeral keys are nice because there is no need for long-term management of the private key; it can be deleted when it is no longer needed, which improves security and reduces operational overhead.

Here’s how to do this with OpenPubkey. You choose a public-private key pair, authenticate to the OP to obtain a PK token for the public key, sign your object using the private key, and finally throw away the private key.   

One-time-use PK token 

We can take this a step further and ensure the PK token may only be associated with a single signed object. Here’s how it works. To start, we take a hash of the object to be signed. Then, when the workload authenticates to GitHub’s OP, it sets the audience claim equal to the cryptographic hash of the following items:

  • The public key 
  • The hash of the object to be signed
  • Some random noise

Finally, an OpenPubkey verifier obtains the signed object and its one-time-use PK token, and then validates the PK token by additionally checking that the hash of the signed object is included in the audience claim. Now, you have a one-time-use PK token. You can learn more about this feature of OpenPubkey in the repo.
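A sketch of that stronger commitment, under the same caveat that OpenPubkey defines the real encoding:

```go
package oidcsketch

import (
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/base64"
)

// oneTimeAudience commits the audience claim to the public key, the hash of
// the object being signed, and fresh noise, so the resulting PK token can
// vouch for that single object only. A verifier recomputes this value from
// the signed object and compares it against the token's audience claim.
func oneTimeAudience(pub ed25519.PublicKey, object, noise []byte) string {
	objHash := sha256.Sum256(object)
	h := sha256.New()
	h.Write(pub)
	h.Write(objHash[:])
	h.Write(noise)
	return base64.RawURLEncoding.EncodeToString(h.Sum(nil))
}
```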

How will Docker use OpenPubkey to sign Docker Official Images?

Docker will be using OpenPubkey with GitHub Actions workloads to sign in-toto attestations on Docker Official Images. Docker Official Images will be created using a GitHub Action workload. The workload creates a fresh ephemeral public-private key pair, obtains the PK token for the public key via OpenPubkey, and finally signs attestations on the image using the private key.  

The private key is then deleted, and the image, its signature, and the PK token will be made available on the Docker Hub container registry. This approach is nice because it doesn’t require the signer to maintain or store the private key.

Docker’s container signing use case also relies heavily on The Update Framework (TUF), another Linux Foundation open source project. Read “Signing Docker Official Images Using OpenPubkey” for more details on how it all works.

What else can you do with OpenPubkey and GitHub Actions workloads?

Check out the following ideas on how to put OpenPubkey and GitHub Actions to work for you.

Signing private artifacts with a one-time key 

Consider signing artifacts that will be stored in a private repository. You can use OpenPubkey if you want to have a GitHub Action cryptographically sign an artifact using a one-time-use key. A nice thing about this approach is that it doesn’t require you to expose information in a public repository or transparency log. Instead, you need to post the artifact, its signature, and its PK token in the private repository. This capability is useful for private code repositories or internal build systems where you don’t want to reveal to the world what is being built, by whom, when, or how frequently. 

If relevant, you could also consider using the actor and actor_id claims to bind the human who builds a particular artifact to the signed artifact itself.

Authenticating workload-to-workload communication

Suppose you want one workload (call it Bob) to process an artifact created by another workload (call it Alice). If the Alice workload is a GitHub Action, the artifact it creates could be signed using OpenPubkey and passed on to the Bob workload, which uses an OpenPubkey verifier to verify it using the GitHub OP’s public key (which it would obtain from the GitHub OP’s JWKS URL). This approach might be useful in a multi-stage CI/CD process.

And other things, too! 

These are just strawman ideas. The whole point of this post is for you to try out OpenPubkey, contribute, and build your own use cases on it.

Other technical issues we need to think about

Before we wrap up, we need to discuss a few technical questions.

Aren’t ID tokens supposed to remain private?

You might worry about applications of OpenPubkey where the ID token is broadly exposed to the public inside the PK token. For example, in the Docker Official Image signing use case, the PK tokens are made available to the public in the Docker Hub container registry. If the ID token is broadly exposed to the public, there is a risk that it could be replayed and used for unauthorized access to other services.

For this reason, we have a slightly different PK token for applications where the PK token is made broadly available to the public. 

For those applications, OpenPubkey strips the OP’s signature from the ID token before including it in the PK token. The OP’s signature is replaced with a Guillou-Quisquater (GQ) non-interactive proof-of-knowledge for an RSA signature (also known as a “GQ signature”). Now, the ID token cannot be replayed against other services because the OP’s signature is removed, but the security of the OP’s RSA signature is maintained by the GQ signature.

So, in applications where the PK token must be broadly exposed to the public, the PK token is a JSON Web Signature, which consists of:

  • The ID token excluding the OP’s signature
  • A GQ signature on the ID token
  • The user’s public key
  • The random noise used to compute the hash of the user’s public key
  • A signature, under the user’s public key, of all the information in the PK token 

The GQ signature allows the client to prove that the ID token was validly signed by the OP, without revealing the OP’s signature. The OpenPubkey client generates the GQ signature to cryptographically prove that the client knows the OP’s signature on the ID token, while still keeping the OP’s signature secret. GQ signatures only work with RSA, but this is fine because every OpenID Connect provider is required to support RSA.

Because GQ signatures are larger and slower than regular signatures, we recommend using them only for use cases where the PK token must be made broadly available to the public. BastionZero’s infrastructure access use case does not use GQ signatures because it does not require the PK token to be made public. Instead, the user only exposes their PK token to the target (e.g., server, container, cluster, database) that they want to access; this is the same way an ID token is usually exposed with OpenID Connect.   

GQ signatures might not be necessary when authenticating workload-to-workload communications; if the Alice workload is passing the signed artifact and its PK token only to the Bob workload, there is less of a concern because the PK token is not broadly exposed to the public.

What happens when the OP rotates its OpenID Connect key? 

OPs have OpenID Connect signing keys that change over time (e.g., every two weeks). What happens if we need to use a PK token after the OP rotates the OpenID Connect key that signed the PK token?

For some use cases, the lifetime of a PK token is typically short. With BastionZero’s infrastructure access use case, for instance, a PK token will not be used for longer than 24 hours. In this use case, these timing problems are solved by (1) having the user re-authenticate to the IdP and create a new PK token whenever the IdP rotates its key, and (2) having the OpenPubkey verifier check that the client also has a valid OIDC refresh token along with the PK token whenever the ID token expires.

For some use cases, the PK token has a long life, so we do need to worry about the OP rotating its OpenID Connect keys. With Docker’s attestation signing use case, this problem is solved by having TUF additionally store a historical log of the OP’s signing keys. Anyone can keep a historical log of the OP’s public keys for use after they expire. In fact, we envision a future where OPs might keep this historical log themselves.

That’s it for now! You can check out the OpenPubkey repo on GitHub. We’d love for you to join the project, contribute, and identify other use cases where OpenPubkey might be useful.

Learn more

Docker 2023: Milestones, Updates, and What’s Next

https://www.docker.com/blog/docker-highlights-2023/ (Wed, 20 Dec 2023)

We’ve had an exciting year at Docker, with loads of product news and announcements. Don’t worry if you couldn’t keep up with the pace of our news and product releases. We’ve rounded up highlights from 2023 and look ahead to how we plan to stay the #1 most-used developer tool as we roll into 2024.

Docker milestones & performance improvements

Docker Desktop updates

We’ve been hard at work enhancing Docker Desktop this year. Among the notable highlights:

Performance milestones

Read “Docker’s Journey Toward Enabling Lightning-Fast Developer Innovation: Unveiling Performance Milestones” to learn about:

  • 75% startup time speed improvements
  • 85x improvement in upload speed
  • 650% improvement in image download speeds
  • 71% reduction in build time
  • 38,500 CPU hours saved daily with Resource Saver mode

Download the latest Docker Desktop release to take advantage of the performance improvements.

Simplifying software supply chain management

We’ve simplified software supply chain management for developers with Docker Scout. Docker Scout policies enable teams to identify, prioritize, and fix their software quality issues at the point of creation to meet their organization’s reliability and security standards while accelerating the speed of execution and innovation. 

Learn how to achieve security and compliance goals with policy guardrails in Docker Scout. Visit the Docker Scout product page to learn more.

20 new Docker extensions

Twenty new Docker extensions were added to the Docker extension marketplace in 2023. We highlighted a few extensions on the Docker blog, including Kubescape, NebulaGraph, Gefyra, LocalStack, and Grafana. Explore Docker Hub to discover more extensions, and use the Docker Extensions SDK to create and share your own.

New Docker features 

We also announced:

All things AI/ML

2023 will be known as the year of AI/ML. For 2024, our investments in AI promise to bring new services and functionality to Docker customers. Recent announcements include:

Also check out our blog post “Why Are There More Than 100 Million Pull Requests for AI/ML Images on Docker Hub?” to learn how Docker is providing a powerful tool for AI/ML development.

Expanding developer experiences

AtomicJar joins Docker

In December, we were excited to welcome AtomicJar, the makers of Testcontainers, to the Docker family. “Docker already accelerates the ‘inner loop’ app development steps — build, verify (through Docker Scout), run, debug, and share — and now, with AtomicJar and Testcontainers, we’re adding ‘test,’” explains Docker CEO Scott Johnston. As a result, developers using Docker will be able to deliver quality applications faster and with less effort. Read our announcement blog post and FAQ to learn more about AtomicJar and Testcontainers.

Mutagen joins Docker

In June, we announced the acquisition of Mutagen, the company behind the open source Mutagen file synchronization and networking technologies that enable high-performance remote development. The Mutagen File Sync feature of Docker Desktop takes file sharing to new heights with up to a 16.5x improvement in performance. To try it and help influence Docker’s future, sign up for the Docker Desktop Preview Program.

Microsoft Dev Box and Docker Desktop

We announced our partnership with the Microsoft Dev Box team to bring additional benefits to developer onboarding, environment set-up, security, and administration with Docker Desktop. You can navigate to the Azure Marketplace to download the Docker Desktop-Dev Box compatible image and start developing in the cloud with a native experience. Additionally, this image can be activated with your current subscription, or you can buy a Docker Business subscription directly on Azure Marketplace.

Docker and Snowflake collaboration

At Snowflake BUILD, we announced Docker Desktop with Snowpark Container Services (private preview). Watch the session to learn more about accelerating deployments of data workloads with Docker and Snowpark. 

Docker in action

Customer highlights from 2023 include:

What’s next

In October at DockerCon, Docker and Udemy announced a partnership to offer developers accessible learning paths to further their Docker education. Read the announcement blog post to learn more about what we’ve planned.

Want to dive deeper into Docker? DockerCon videos are available now on YouTube. 

Do your New Year goals include expanding your Docker expertise? Watch the on-demand webinar Docker Fundamentals: Get the Most Out of Docker.

Check out our public roadmap to help steer the future of Docker.

Thank you to our community of developers, Docker Captains and Community Leaders, customers, and partners! We look forward to our continued work building our future together in the New Year. 

Learn more

Using Authenticated Logins for Docker Hub in Google Cloud

https://www.docker.com/blog/authenticated-logins-docker-hub-in-google-cloud/ (Tue, 19 Dec 2023)

The rise of open source software has led to more collaborative development, but it’s not without challenges. While public container images offer convenience and access to a vast library of prebuilt components, their lack of control and potential vulnerabilities can introduce security and reliability risks into your CI/CD pipeline.

This blog post delves into best practices that your teams can implement to mitigate these risks and maintain a secure and reliable software delivery process. By following these guidelines, you can leverage the benefits of open source software while safeguarding your development workflow.

1. Store local copies of public containers

To minimize risks and improve security and reliability, consider storing local copies of public container images whenever feasible. The Open Containers Initiative offers guidelines on consuming public content, which you can access for further information.

2. Use authentication when accessing Docker Hub

For secure and reliable CI/CD pipelines, authenticating with Docker Hub instead of using anonymous access is recommended. Anonymous access exposes you to security vulnerabilities and increases the risk of hitting rate limits, hindering your pipeline’s performance.

The specific authentication method depends on your CI/CD infrastructure and Google Cloud services used. Fortunately, several options are available to ensure secure and efficient interactions with Docker Hub.

3. Use Artifact Registry remote repositories 

Instead of directly referencing Docker Hub repositories in your build processes, opt for Artifact Registry remote repositories for secure and efficient access. This approach leverages Docker Hub access tokens, minimizing the risk of vulnerabilities and facilitating a seamless workflow.

Detailed instructions on configuring this setup can be found in the following Artifact Registry documentation: Configure remote repository authentication to Docker Hub.

authenticated dockerhub login

4. Use Google Cloud Build to interact with Docker images 

Google Cloud Build offers robust authentication mechanisms to pull Docker Hub images seamlessly within your build steps. These mechanisms are essential if your container images rely on external dependencies hosted on Docker Hub. By implementing these features, you can ensure secure and reliable access to the necessary resources while streamlining your CI/CD pipeline.

Implementing the best practices outlined above offers significant benefits for your CI/CD pipelines. You’ll achieve a stronger security posture and reduced reliability risks, ensuring smooth and efficient software delivery. Additionally, establishing robust authentication controls for your development environments prevents potential roadblocks that could arise later in production. As a result, you can be confident that your processes comply with or surpass corporate security standards, further solidifying your development foundation.

Learn more

Visit the following product pages to learn more about the features that assist you in implementing these steps.

Maximizing Software Development’s ROI: Forrester’s TEI Study of Docker Business

https://www.docker.com/blog/forresters-tei-study-of-docker-business/ (Mon, 18 Dec 2023)

Docker’s commitment to empowering developers and organizations is evident in its ongoing investment in the Docker Business subscription, which includes Docker Desktop, Docker Hub, and Docker Scout. Through collaborative efforts with a vibrant user community and customers, Docker has pioneered best practices and innovations that significantly streamline application development workflows.

Today, Docker Business — Docker’s solution that supports organizations of every size in optimizing DevOps, CI/CD, debugging, and IT processes — marks a significant step in enhancing enterprise development efficiency. The recent Forrester Total Economic Impact™ (TEI) study commissioned by Docker underscores for us the measurable benefits experienced by Docker Business users, including accelerated development agility, reduced time-to-market, and substantial cost savings.

Maximizing resource efficiency with Docker 

Docker Business transforms the developer experience by simplifying workflows across multiple development phases. Beyond optimizing DevOps, CI/CD, and IT processes, it delivers a higher-quality, more intuitive management experience for packaging, distributing, and executing applications across diverse computing environments.

Offering a combined developer interface and toolset makes creating containerized applications easier. It also reduces the complications of piecing together separate solutions and of old-style virtual machines (VMs), making data centers work more efficiently.

Enhanced security and rapid deployment

Security remains a pivotal focus for Docker Business, employing robust measures like isolation and encryption to safeguard applications and data. The streamlined development cycles enabled by Docker Business expedite application deployment and testing, fostering a culture of innovation and agility within enterprises.

Key insights from the Forrester TEI™ Study

Forrester conducted comprehensive interviews with representatives from top global technology manufacturers, consolidated and referred to as a composite organization, uncovering Docker Business’s capacity to tackle issues associated with slow legacy systems and costly VM dependencies. The study also shows compelling statistics highlighting Docker Business’s impact on the composite organization:

  • 6% increase in application developer productivity
  • Improved DevOps engineer-to-developer ratio from 1:20 to 1:60
  • 3x reduction in servers due to increased VM density
  • 3 months faster time-to-market for revenue-generating applications

Embrace Docker Business for transformational outcomes

The transformative potential of Docker Business is evident in its effective resolution of legacy system challenges and dependency on traditional VMs with a secure and flexible development platform built to ensure enterprises, teams, and developers’ success. Docker Business opens the door to remarkable benefits for organizations by enhancing developer velocity, accelerating development agility, reducing time-to-market, and delivering substantial cost savings to the business.

The study quantified benefits to the composite organization, including:

  • DevOps and IT productivity: $10.1M
  • Application developer productivity: $18.8M
  • Reduced data center capacity requirement for legacy apps: $3.9M
  • Reduced data center capacity requirement for new apps: $69.9M
  • Net operating profit due to improved time-to-market of new apps: $17.4M

Download the full Forrester Total Economic Impact™ (TEI) study to learn more about how Docker Business with Docker Desktop, Docker Hub, and Docker Scout fosters a positive total economic impact.

Learn more

Docker Whale-comes AtomicJar, Maker of Testcontainers

https://www.docker.com/blog/docker-whale-comes-atomicjar-maker-of-testcontainers/ (Mon, 11 Dec 2023)

We’re shifting testing “left” to help developers ship quality apps faster

I’m thrilled to announce that Docker is whale-coming AtomicJar, the makers of Testcontainers, to the Docker family. With its support for Java, .NET, Go, Node.js, and six other programming languages, together with its container-based testing automation, Testcontainers has become the de facto standard test framework for the developer’s ”inner loop.” Why? The results speak for themselves — Testcontainers enables step-function improvements in both the quality and speed of application delivery.

This addition continues Docker’s focus on improving the developer experience to maximize the time developers spend building innovative apps. Docker already accelerates the “inner loop” app development steps — build, verify (through Docker Scout), run, debug, and share — and now, with AtomicJar and Testcontainers, we’re adding “test.” As a result, developers using Docker will be able to deliver quality applications with less effort, even faster than before.

Testcontainers itself is a great open source success story in the developer tools ecosystem. Last year, Testcontainers saw a 100% increase in Docker Hub pulls, from 50 million to 100 million, making it one of the fastest-growing Docker Hub projects. Furthermore, Testcontainers has transformed testing at organizations like DoorDash, Netflix, Spotify, and Uber, as well as at thousands more.

One of the more exciting things about whale-coming AtomicJar is bringing together our open source communities. Specifically, the Testcontainers community has deep roots in the programming language communities above. We will continue to support the Testcontainers open source project and look forward to what our teams do to expand it further.

Please join me in whale-coming AtomicJar and Testcontainers to Docker!

sj

FAQ | Docker Acquisition of AtomicJar

With Docker’s acquisition of AtomicJar and associated Testcontainers projects, you’re sure to have questions. We’ve answered the most common ones in this FAQ.

As with all of our open source efforts, Docker strives to do right by the community. We want this acquisition to benefit everyone — community and customer — in keeping with our developer obsession.

What will happen to Testcontainers Cloud customers?
Customers of AtomicJar’s paid offering, Testcontainers Cloud, will continue while we work to develop new and better integration options. Existing Testcontainers Cloud subscribers will see an update to the supplier on their invoices, but no other billing changes will occur.

Will Testcontainers become closed-source?
There are no plans to change the licensing structure of Testcontainers’s open source components. Docker has always valued the contributions of open source communities.

Will Testcontainers or its companion projects be discontinued?
There are no plans to discontinue any Testcontainers projects.

Will people still be able to contribute to Testcontainers’s open source projects?
Yes! Testcontainers has always benefited from outside collaboration in the form of feedback, discussion, and code contributions, and there’s no desire to change that relationship. For more information about how to participate in Testcontainers’s development, see the contributing guidelines for Java, Go, and .NET.

What about other downstream users, companies, and projects using Testcontainers?
Testcontainers’ open source licenses will continue to allow the embedding and use of Testcontainers by other projects, products, and tooling.

Who will provide support for Testcontainers projects and products?
In the short term, support for Testcontainers’s projects and products will continue to be provided through the existing support channels. We will work to merge support into Docker’s channels in the near future.

How can I get started with Testcontainers?
To get started with Testcontainers, follow this guide or one of the guides for a language of your choice:

Empowering Data-Driven Development: Docker’s Collaboration with Snowflake and Docker AI Advancements

https://www.docker.com/blog/docker-collaboration-snowflake-snowpark/ (Wed, 06 Dec 2023)

Docker, in collaboration with Snowflake, introduces an enhanced level of developer productivity when you leverage the power of Docker Desktop with Snowpark Container Services (private preview). At Snowflake BUILD, Docker presented a session showcasing the streamlined process of building, iterating, and efficiently managing data through containerization within Snowflake using Snowpark Container Services.

Watch the session to learn more about how this collaboration helps streamline development and application innovation with Docker, and read on for more details. 

Docker Desktop with Snowpark Container Services helps empower developers, data engineers, and data scientists with the tools and insights needed to seamlessly navigate the intricacies of incorporating data, including AI/ML, into their workflows. Furthermore, the advancements in Docker AI within the development ecosystem promise to elevate GenAI development efforts now and in the future.

Through the collaborative efforts showcased between Docker and Snowflake, we aim to continue supporting and guiding developers, data engineers, and data scientists in leveraging these technologies effectively.

Accelerating deployment of data workloads with Docker and Snowpark

Why is Docker, a containerization platform, collaborating with Snowflake, a data-as-a-service company? Many organizations lack formal coordination between data and engineering teams, meaning every change might have to go through DevOps, slowing project delivery. Docker Desktop and Snowpark Container Services (private preview) improve collaboration between developers and data teams. 

This collaboration allows data and engineering teams to work together, removing barriers to enable:

  • Ownership by streamlining development and deployment
  • Independence by removing traditional dependence on engineering stacks 
  • Efficiency by reducing resources and improving cross-team coordination

With the growing number of applications that rely on data, Docker is invested in ensuring that containerization supports the changing development landscape to provide consistent value within your organization.

Streamlining Snowpark deployments with Docker Desktop 

Docker Desktop provides many benefits to data teams, including smoother data ingestion and enrichment and fewer workarounds when working with a data stack. Watch the video from Snowflake BUILD for a demo showing the power of Docker Desktop and Snowpark Container Services working together. We walk through:

  1. How to create a Docker Image using Docker Desktop to help you drive consistency by encapsulating your code, libraries, dependencies, and configurations in an image.
  2. How to push that image to a registry to make it portable and available to others with the correct permissions.
  3. How to run the container as a job in Snowpark Container Services to help you scale your work with versioning and distributed deployments. 

Using Docker Desktop with Snowpark Container Services provides an enhanced development experience for data engineers who can develop in one environment and deploy in another. For example, with Docker Desktop you can create on an Arm64 platform, yet deploy to Snowpark, an AMD64 platform. This is possible thanks to multi-platform images, so you can keep a great local development environment and still deploy to Snowpark without any difficulty.

Boosting developer productivity with Docker AI 

In alignment with Docker’s mission to increase the time developers spend on innovation and decrease the time they spend on everything else, Docker AI assists in streamlining the development lifecycle for both development and data teams. Docker AI, available in early access now, aims to simplify current tasks, boosting developer productivity by offering context-specific, automated guidance. 

When using Snowpark Container Services, deploying the project to Snowpark is the next step once you’ve built your image. Leveraging its model trained on Snowpark documentation, Docker AI offers relevant recommendations within your project’s context. For example, it autocompletes Dockerfiles with best practice suggestions and continually updates recommendations as projects evolve and security measures change.

This marks Docker’s initial phase of aiding the community’s journey in simplifying using big data and implementing context-specific AI guidance across the software development lifecycle. Despite the rising complexity of projects involving vast data sets, Docker AI provides support, streamlining processes and enhancing your experience throughout the development lifecycle.

Docker AI aims to deliver tailored, automated advice during Dockerfile or Docker Compose editing, local docker build debugging, and local testing. Docker AI leverages the wealth of knowledge from the millions of long-time Docker users to autogenerate best practices and recommend secure, updated images. With Docker AI, developers can spend more time innovating their applications and less time on tools and infrastructure. Sign up for the Docker AI Early Access Program now.

Improving the collaboration across development and data teams

Our continued investment in Docker Desktop and Docker AI, along with our key collaborators like Snowflake, help you streamline the process of building, iterating, and efficiently managing data through containerization.

Download Docker Desktop to get started today. Check with your admins — you may be surprised to find out your organization is already using Docker! 

Learn more

Announcing Builds View in Docker Desktop GA

https://www.docker.com/blog/announcing-builds-view-in-docker-desktop-ga/ (Wed, 06 Dec 2023)

As an engineer in a product development team, your primary focus is innovating new services to push the organization forward. We know how frustrating it is to be blocked because of a failing Docker build or to have the team be slowed down because of an unknown performance issue in your builds.

Due to the complex nature of some builds, understanding what is happening with a build can be tricky, especially if you are new to Docker and containerization.

To help solve these issues, we are excited to announce the new Builds view in Docker Desktop, which provides detailed insight into your build performance and usage. Get a live view of your builds as they run, explore previous build performance, and deep dive into errors and cache issues.

What is causing my build to fail?

The Builds view lets you look through recent and past builds to diagnose a failure long after losing the logs in your terminal. Once you have found the troublesome build, you can explore all the runtime context of the build, including any arguments and the full Dockerfile. The UI provides you with the full build log, so you no longer need to go back and re-run the build with --progress=plain to see exactly what happened (Figure 1).

Figure 1: A past Docker build’s logs showing an error in one of the steps.

You can see the stack trace right next to the Dockerfile command that is causing the issues, which is useful for understanding the exact step and attributes that caused the error (Figure 2).

Figure 2: A view of a Dockerfile with a stack trace under a step that failed.

You can also check whether this issue has happened before or look at what changed to cause it. A jump in run time compared to the baseline can be seen by inspecting previous builds for this project and viewing what changed (Figure 3).

Figure 3: The build history view showing timing information, caching information, and completion status for historic builds of the same image.

What happened to the caching?

We often hear about how someone in the team made a change, impacting the cache utilization. The longer such a change goes unnoticed, the harder it can be to locate what happened and when.

The Builds view plots your build duration alongside cache performance. Now, it’s easy to see a spike in build times aligned with a reduction in cache utilization (Figure 4).

Figure 4: Enlarged view of the build history calling out the cache hit ratio for builds of the same image.

You can click on the chart or select from the build history to explore what changed before and after the degradation in performance. The Builds view keeps all the context from your builds, the Dockerfile, the logs, and all execution information (Figure 5).

Figure 5: An example of a Dockerfile for a historic build of an image that lets you compare what changed over time.

You can even see the commit and source information for the build and easily locate who made the change for more help in resolving the issue (Figure 6).

Figure 6: The info view of a historic build of an image showing the location of the Git repository being used and the digest of the commit that was built.

An easier way to manage builders

Previously, users have been able to manage builders from the CLI, providing a flexible method for setting up multiple permutations of BuildKit.

Although this approach is powerful, it would require many commands to fully inspect and manage all the details for your different builders. So, as part of our efforts to continuously make things easier for developers, we added a builder management screen with Docker Desktop (Figure 7).

Figure 7: The builder inspection view, showing builder configuration and storage utilization.

All the important information about your builders is available in an easy-to-use dashboard, accessible via the Builds view (or from settings). Now, you can quickly see your storage utilization and inspect the configuration.

Figure 8: Conveniently start, stop, and switch your default builder.

You can also switch your default builder and easily start and stop them (Figure 8). Now, instead of having to look up which command-line options to call, you can quickly select from the drop-down menu.

Get started

The new Builds view is available in the new Docker Desktop 4.26 release; upgrade and click on the new Builds tab in the Dashboard menu.

We are excited about the new Builds view, but this is just the start. There are many more features in the pipeline, but we would love to hear what you think.

Give Builds view a try and share your feedback on the app. We would also love to chat with you about your experience so we can make the best possible product for you.

Update to Docker Desktop 4.26 to get started!

Learn more

Docker Desktop 4.26: Rosetta, PHP Init, Builds View GA, Admin Enhancements, and Docker Desktop Image for Microsoft Dev Box

https://www.docker.com/blog/docker-desktop-4-26/ (Wed, 06 Dec 2023)

We’re happy to announce the release of Docker Desktop 4.26, which delivers the latest breakthroughs in Rosetta for Docker Desktop optimization, transforming the Docker experience for all users. The new release also boosts developer productivity by solving common issues such as Node.js freezes and PHP segmentation faults and supercharges performance with speed enhancements and a new view into your Docker Desktop builds.

Read on to learn how Rosetta slashes Linux kernel build times, accelerates PHP projects, and optimizes image building on Apple silicon. Additionally, we are introducing PHP support in Docker Init and enabling administrators to manage access to Docker Desktop Beta and Experimental Features.

Upgrade to Docker Desktop 4.26 and explore these updates, which enable smoother development experiences and seamless containerization for diverse tech stacks.

Rosetta for Docker Desktop

Docker Desktop 4.26 ensures a smoother Rosetta for Docker Desktop experience:

  • Node.js freezing for extended periods? Fixed.
  • PHP encountering segmentation faults? Resolved.
  • Programs dependent on chroot? Also addressed.
  • Rosetta hangs on Sonoma 14.0? No more.

Moreover, our team has been hard at work improving Rosetta’s performance in specific scenarios. Consider, for example, building projects like PostHog for both AMD64 and Arm64. Previously clocking in at 17 minutes, it’s now achieved in less than 6 minutes. 

You can now experience the power of Rosetta for Docker Desktop as it reduces a Linux kernel build from 39 minutes under QEMU to 17 minutes with just 10 CPUs.

PHP and Composer users will discover that building Sylius Standard from scratch now takes only 6 minutes (down from 20) with Docker Desktop’s default configuration on Rosetta.

While building AMD64 images on Apple silicon with Rosetta is faster than ever, native Arm64 images remain the speediest option. Docker Hub hosts a variety of Arm64 images for your preferred language, ensuring fast performance for your projects. 

Introducing PHP support in Docker Init

We want to ensure that we continue making Docker easier for all of our users and all languages. Based on user insights, we’ve launched Docker Init (Beta) — simplifying containerization for various tech stacks. (Read “Docker Init: Initialize Dockerfiles and Compose files with a single CLI command” to learn more.)

Docker Init automatically generates Dockerfiles, Compose YAML, and `.dockerignore` files by detecting your application’s language and runtime specifics. Initially supporting Golang, Python, Node, Rust, and ASP.NET Core, Docker Init offers PHP web application support in Docker Desktop 4.26 (Figure 1).

Figure 1: Docker Init showing available languages, now including PHP.

Users can now create Dockerfiles for PHP projects, covering Apache-based web applications using Composer for managing dependencies.

Get started by ensuring you have the latest Docker Desktop version. Then, execute docker init in your project directory through the command line. Let Docker Init handle the heavy lifting, allowing you to concentrate on your core task — building outstanding applications.

Introducing Docker Desktop’s Builds view GA

For engineers focused on innovation, build issues can be a major roadblock. That’s why we’re happy to announce the general availability of the Builds view, offering detailed insights into build performance. Get live updates on your builds, analyze past performance, and troubleshoot errors and cache issues.

The Builds view simplifies troubleshooting by retaining past build data, ensuring you can diagnose failures long after losing terminal logs. Easily explore runtime context, including arguments and the complete Dockerfile. Access the full build log directly from the UI, eliminating the need to re-run builds for a detailed overview (Figure 2).

Figure 2: The build history view showing timing information, caching information, and completion status for historic builds of the same image.

Read the announcement blog post to learn more about the Builds view GA release.

Admin update: Managing access to Docker Beta and Experimental Features 

At Docker, we continuously experiment and deliver the latest features directly into the hands of our users and customers. We’re dedicated to empowering Docker administrators by offering increased control over how these innovations are introduced within their development organizations. Through the flexibility of the admin-settings.json, administrators can now fine-tune feature accessibility (Figure 3).

Figure 3: User experience after an administrator has restricted access to Beta features.

This update enables precise customization, allowing admins to align Docker’s Beta and Experimental Features with their organization’s specific requirements. Whether restricting access to individual tabs or implementing comprehensive controls across the board, this enhancement caters to diverse development practices, providing the flexibility needed to optimize the Docker experience for every user (Figure 4).

Figure 4: User experience after an administrator has restricted access to Experimental features.

Refer to the documentation for more on configuration settings management.
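As an illustrative sketch only, an admin-settings.json entry locking these features might look like the following. The key names here are assumptions for illustration, not a verified schema, so consult the Settings Management documentation for the exact keys and format:

```json
{
  "configurationFileVersion": 2,
  "allowBetaFeatures": { "locked": true, "value": false },
  "allowExperimentalFeatures": { "locked": true, "value": false }
}
```

In this pattern, value sets the feature state and locked prevents developers from changing it in the Docker Desktop settings UI.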

Develop in the cloud with Docker Desktop and Microsoft Dev Box

In addition to running Docker Desktop from the comfort of your personal computer, you can now leverage this familiar experience within the cloud with Microsoft Dev Box. In a Microsoft Ignite session and a recent blog post, developers got their first glimpse of how easy it can be to create containers in the cloud with Docker Desktop and Microsoft Dev Box.  

We invite you to navigate to the Azure Marketplace to download the public preview of the Docker Desktop-Dev Box compatible image and start developing in the cloud with a native experience. Additionally, this image can be activated with your current subscription, or you can buy a Docker Business subscription directly on Azure Marketplace.

Conclusion

Stay tuned for more groundbreaking developments and optimizations to streamline your Docker experience. Your feedback fuels our progress, and we’re committed to delivering solutions that simplify development and empower every user.

Upgrade to Docker Desktop 4.26 to explore these updates and experiment with Docker’s latest features.

Learn more

Announcing the Docker AI/ML Hackathon 2023 Winners

https://www.docker.com/blog/announcing-the-docker-ai-ml-hackathon-2023-winners/ (Tue, 05 Dec 2023)

The week of DockerCon 2023 in Los Angeles, we announced the kick-off of the Docker AI/ML Hackathon. The hackathon ran as a virtual event from October 3 to November 7 with support from partners including DataStax, Livecycle, Navan.ai, Neo4j, and OctoML. Leading up to the submission deadline, we ran a series of webinars on topics ranging from getting started with Docker Hub to setting up computer vision AI models on Docker, and more. You can watch the collection of webinars on YouTube.

The Docker AI/ML Hackathon encouraged participants to build solutions that were innovative, applicable in real life, use Docker technology, and have an impact on developer productivity. We made a lot of announcements at DockerCon, including the new GenAI Stack, and we couldn’t wait to see how developers would put this to work in their projects.  

Participants competed for US$ 20,000 in cash prizes and exclusive Docker swag. Judging was based on criteria such as applicability, innovativeness, incorporation of Docker tooling, and impact on the developer experience and productivity. Read on to learn who took home the top prizes.

The winners

1st place

Signal0ne — This project automates insights from failed containers and anomalous resource usage through anomaly detection algorithms and a Docker desktop extension. Developed using Python and Angular, the Signal0ne tool provides rapid, accurate log analysis, even enabling self-debugging. The project’s key achievements include quick issue resolution for experienced engineers and enhanced debugging capabilities for less experienced ones.

2nd place

SeamlessML: Docker-Powered Serverless Model Orchestration — SeamlessML addresses the AI model deployment bottleneck by providing a simplified, scalable, and cost-effective solution. Leveraging Docker and serverless technologies, it enables easy deployment of machine learning models as scalable API endpoints, abstracting away complexities like server management and load balancing. The team successfully reduced deployment time from hours to minutes and created a local testing setup for confident cloud-like deployments.

3rd place

Dionysus — Dionysus is a developer collaboration platform that streamlines teamwork through automatic code documentation, efficient codebase search, and AI-powered meeting transcription. Built with a microservice architecture using NextJS for the frontend and a Python backend API, Docker containerization, and integration with GitHub, Dionysus simplifies development workflows. The team overcame challenges in integrating AI effectively, ensuring real-time updates and creating a user-friendly interface, resulting in a tool that automates code documentation, facilitates contextual code search, and provides real-time AI-driven meeting transcription.

Honorable mentions

The following winners took home swag prizes. We received so many fantastic submissions that we awarded honorable mentions to four more teams than originally planned!

What’s next?

Check out all project submissions on the Docker AI/ML Hackathon gallery page. Also, check out and contribute to the GenAI Stack project on GitHub and sign up to join the Docker AI Early Access program. We can’t wait to see what projects you create.

We had so much fun seeing the creativity that came from this hackathon. Stay tuned until the next one!

Learn more

Accelerating Developer Velocity with Microsoft Dev Box and Docker Desktop

https://www.docker.com/blog/microsoft-devbox-and-docker-desktop/ (Thu, 16 Nov 2023)

Building a foundation of structure and stability is paramount for the success of any development team, regardless of its size. It’s the key to unlocking velocity, ensuring top-notch quality, and maximizing the return on your investments in developer tools. Recognizing the pivotal role in simplifying application development, we’re taking another leap forward, announcing our partnership with the Microsoft Dev Box team to bring additional benefits to developer onboarding, environment set-up, security, and administration with Docker Desktop.

Today at Microsoft Ignite, Microsoft’s Anthony Cangialosi and Sagar Lankala shared how Microsoft Dev Box and Docker Desktop can free developers from reliance on physical workstations and intricate, hard-to-deploy application infrastructures. This collaborative effort focuses on streamlining onboarding to new projects while bolstering security and efficiency.

Consider the positive impact: 

  • Improved developer productivity: Before this collaboration, setting up the development environment consumed valuable developer time. Now, with Docker and Microsoft’s collaboration, the focus shifts to boosting developer efficiency and productivity and concentrating on meaningful work rather than setup and configuration tasks.
  • Streamlined administration: Previously, developers had to individually download Docker Desktop as a crucial part of their dev toolkit. Now, it’s possible to pre-configure and install Desktop, streamlining administrative tasks.
  • Security at scale: Previously, acquiring necessary assets meant developers had to navigate internal or external sources. With our solution, you can ensure the requisite images/apps are readily available, enhancing security protocols.

Together, we’re delivering a turnkey solution designed to empower individual developers, small businesses, and enterprise development teams. This initiative is poised to expedite project onboarding, facilitating quick dives into new endeavors with unparalleled ease. Join us on this journey toward enhanced efficiency, productivity, and a smoother development experience.

We invite you to navigate to the Azure Marketplace to download the Docker Desktop-Dev Box compatible image and start developing in the cloud with a native experience. Additionally, this image can be activated with your current subscription, or you can buy a Docker Business subscription directly on Azure Marketplace.

Learn more
