S3 Access Path Deprecation

By: Michael Ruxsaksriskul, Cloud Engineer

Thanks for joining us for another session of the JHC Technology Engineer’s Corner. Today’s topic is the first in a series on strategies and mitigation activities related to the recently announced deprecation of the older path-based Amazon S3 access model. You can read more about the change in Jeff Barr’s AWS blog post; I highly recommend reading it before continuing with this one.

In the “moving ahead” section of Jeff’s post, one of the recommendations is “identifying path-style references” in your applications by enabling and reviewing S3 access logs. All fine and dandy, but since S3 access logs are not enabled by default, we could be facing a sea of buckets without logging enabled. So how do we begin navigating these uncharted waters and gain visibility into our environment? You could use an AWS service like AWS Config to audit the logging configuration of your S3 buckets. Great! But what if AWS Config is not an approved solution in your organization? What if you’re in a time crunch and don’t have time to ramp up to AWS Config expert level?

Well, you’re in luck: we have a life buoy in the AWS CLI, our terminal, and existing scripting tools. We’ll leverage these to automate getting some visibility. I’m going to cover an approach using the CLI, the shell, and a very simple shell script.



To follow along, you’ll need:

  • AWS CLI installed and configured

  • An IAM user or role with access to S3 on the target AWS account

Let’s first generate a list of the buckets in your account by executing the s3api list-buckets command:
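A sketch of that command, assuming text output and a --query filter that selects only the bucket names (the original post’s code blocks aren’t shown, so treat the exact flags as an illustration):

```shell
# List every bucket in the account, returning only the Name field.
# With --output text, the results come back on a single tab-separated line.
aws s3api list-buckets --query "Buckets[].Name" --output text
```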

Our output looks like this:
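With hypothetical bucket names, that single-line output would look something like:

```
app-assets-bucket	central-logs-bucket	data-archive-bucket
```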

Note the output is on one line; this is because of the --query parameter combined with text output. This format may not be helpful when piped to a file, or when we need to iterate through the items. Let’s see what we can do to make it easier to work with. We’ll start by wrapping the bucket name field in brackets ([]), which outputs each returned value on its own line. Read more on configuring CLI output here. Let’s try our modified command:
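A sketch of the modified command, with the Name field wrapped in brackets:

```shell
# Wrapping Name in brackets ([Name]) makes each result its own row,
# so text output prints one bucket name per line.
aws s3api list-buckets --query "Buckets[].[Name]" --output text
```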

Now our output is in a format we can easily pipe to a file or iterate through later:
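Using the same hypothetical bucket names, the per-line output looks like:

```
app-assets-bucket
central-logs-bucket
data-archive-bucket
```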

So we now have an inventory of buckets; what’s next? Let’s query our list to confirm the logging status of each bucket with the get-bucket-logging command. We can accomplish this by piping the output of our previous command as an argument to the next command using xargs:
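One way to wire this up (the -I {} placeholder syntax is an assumption; the original post’s exact invocation isn’t shown):

```shell
# Feed each bucket name from the list into get-bucket-logging.
# xargs -I {} substitutes one bucket name per invocation.
aws s3api list-buckets --query "Buckets[].[Name]" --output text \
  | xargs -I {} aws s3api get-bucket-logging --bucket {}
```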

We see one invocation of the command per bucket:

When our command successfully queries a bucket with logging enabled, it returns the logging status, including the target bucket and prefix where logs are being delivered:
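For a bucket with logging enabled, get-bucket-logging returns a LoggingEnabled block like this (the bucket and prefix names here are hypothetical):

```
{
    "LoggingEnabled": {
        "TargetBucket": "central-logs-bucket",
        "TargetPrefix": "app-assets-bucket/"
    }
}
```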

So our command is running along and hits a snag! The output shows we don’t have access to check the logging status of a particular bucket, and no additional output continues after the failure:
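The failure looks something like the standard CLI access error below. Note that xargs stops processing immediately when a command exits with status 255, which is why nothing after the failed bucket is queried:

```
An error occurred (AccessDenied) when calling the GetBucketLogging operation: Access Denied
```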

We can get around this by wrapping our commands in a shell loop and echoing the bucket name along with the CLI’s exit status (echo $?) as a tracker. You can read about CLI return codes here:
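A sketch of that loop (the echo labels are illustrative):

```shell
# Loop over the bucket list so one failed query doesn't stop the rest.
# echo $? prints the CLI's return code after each call
# (0 = success, 255 = command failed).
for bucket in $(aws s3api list-buckets --query "Buckets[].[Name]" --output text); do
  echo "Bucket: $bucket"
  aws s3api get-bucket-logging --bucket "$bucket"
  echo "Return code: $?"
done
```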

So now we have output that lists the bucket and CLI status for each query, and we can see which query failed with 255, the return code for a failed command. The remainder of the list is still iterated through because the for loop continues past failures:

The above shell command can easily be ported to a script like this:
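A script along those lines (the filename is hypothetical):

```shell
#!/bin/bash
# check-bucket-logging.sh: report the S3 server access logging
# status of every bucket in the account, one bucket at a time.

for bucket in $(aws s3api list-buckets --query "Buckets[].[Name]" --output text); do
  echo "Bucket: $bucket"
  aws s3api get-bucket-logging --bucket "$bucket"
  echo "Return code: $?"
done
```

Save it, make it executable with chmod +x, and run it; redirect the output to a file if you want to review the results later.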

Hopefully, equipped with the above CLI commands, shell loop, and script, you can build a list of the buckets that don’t have logging enabled. You’ll be able to confirm which buckets do have logging enabled and exclude them from your list, and you’ll also know where existing logs are being shipped in your environment. Join us next time, when we’ll take the list of buckets without logging enabled and automate enabling logging for them to a centralized bucket. After we confirm logging is enabled on all our desired buckets, we’ll automate determining whether applications are accessing objects with the soon-to-be-deprecated path-based access model. Until next time, Happy Clouding!

If you or your organization have more questions regarding these changes in S3, reach out to [email protected] to set up some time for a chat.
