Recently, I ran a poll on Twitter asking how people interact with boto3, the AWS Python SDK (why is it called boto3? See the appendix at the end of this article). What I wanted to know is how many people use boto3 sessions, and how many use the module-level functions. I asked which style people use:
s3 = boto3.client('s3')
ddb = boto3.resource('dynamodb')
or
session = boto3.Session()
s3 = session.client('s3')
ddb = session.resource('dynamodb')
The split ended up being about 70% in favor of the first option. In this article I’ll share why most application and library code I write uses the second, though when I’m writing an ad hoc script or in the Python REPL, I often use the first. I’ll also explain a library I wrote that helps make programmatic role assumption with boto3 simpler, using sessions.
To start, let’s talk about how boto3 works, and what a session is. If you know this, you can skip this section.
Each AWS service API gets a client, which provides the API interface. (More precisely, each service identifier gets a client; multiple service identifiers may belong to a single branded service. For example, iot and iot-data are both API identifiers within AWS IoT Core.) We'll set aside "service resources" for simplicity, but everything we'll talk about applies equally to them.
An excellent “Hello World” for boto3 is the following:
import boto3
sts = boto3.client('sts')
print(sts.get_caller_identity())
The STS.GetCallerIdentity API returns the account and IAM principal (IAM user or assumed role) of the credentials used to call it. It's a good way to confirm what identity you're using, and additionally it requires no permissions, so it will work with any valid credentials. (You can also call it from the CLI using aws sts get-caller-identity, and for a more user-friendly wrapper, see aws-whoami.)
What happens when you call boto3.client()? Let's look at the code:
def client(*args, **kwargs):
    return _get_default_session().client(*args, **kwargs)
_get_default_session() is a caching function for the field boto3.DEFAULT_SESSION, which is an object of type boto3.Session. So the function boto3.client() is really just a proxy for the boto3.Session.client() method. So what is a session, then?
The boto3.Session class, according to the docs, "stores configuration state and allows you to create service clients and resources." Most importantly, it represents the configuration of an IAM identity (IAM user or assumed role) and an AWS region, the two things you need to talk to an AWS service. There's a wealth of other configuration inside, but conceptually, think of it that way. Got one set of credentials and need to talk to two regions? Use two sessions. Same region, but different credentials? Different sessions.
Note that a session does not correspond to other notions of “session” you may have in your code. A Lambda function instance has the same identity and region throughout its life, so each invocation would not need a new session (you can create your session during function initialization). A web server that is using the same credentials and region for all requests would use the same session for all callers.
There are three main ways to create a session (Session class constructor docs here). One is directly with a set of IAM credentials (e.g., IAM user credentials) and a region. Another is with the profile_name keyword argument, which will pull the configuration from a profile in ~/.aws/config and/or ~/.aws/credentials (I've got an explainer on those files here). The third is to create a session with no inputs, and let it search for the configuration in a number of places.
Going back to boto3.client(), the code for _get_default_session() is the following:
def _get_default_session():
    if DEFAULT_SESSION is None:
        setup_default_session()
    return DEFAULT_SESSION
and the code for boto3.setup_default_session() looks like this (skipping the detail of global):
def setup_default_session(**kwargs):
    DEFAULT_SESSION = Session(**kwargs)
So, if we go back to our “Hello World”:
import boto3
sts = boto3.client('sts')
print(sts.get_caller_identity())
The STS client is created on a session created with no arguments. What happens in that case? The session goes through a chain of configuration sources to find credentials, region, and other configuration. You can see details in the boto3 docs here, though they fail to mention that at the bottom of the chain are container and EC2 instance credentials, which will get picked up as well. Note that even if credentials aren't found, or the configuration isn't complete, the session will not raise an error. The session only actually resolves credentials, etc. when they are needed (so if there aren't credentials to be found, it's the sts.get_caller_identity() line that will raise an exception).
My argument is that when you're writing application or library code (as opposed to short, one-off scripts), you should always use a session directly, rather than using the module-level functions. To see why, consider the following function, which retrieves a name from a DynamoDB table:
def greet(table_name, user_id):
    ddb = boto3.resource('dynamodb')
    table = ddb.Table(table_name)
    item = table.get_item(Key={'id': user_id})
    print('Hello {}'.format(item['Item']['name']))
What happens if I want to use this function in a single script, but with two different tables in different regions? I could add a parameter:
def greet(table_name, user_id, region=None):
    ddb = boto3.resource('dynamodb', region_name=region)
    table = ddb.Table(table_name)
    item = table.get_item(Key={'id': user_id})
    print('Hello {}'.format(item['Item']['name']))
What happens if I want to use this function in a single script, but with two different sets of credentials? The Session class exists to encapsulate all this configuration. So something a bit better would look like:
def greet(session, table_name, user_id):
    ddb = session.resource('dynamodb')
    table = ddb.Table(table_name)
    item = table.get_item(Key={'id': user_id})
    print('Hello {}'.format(item['Item']['name']))
Now, it may be inconvenient to force the user to pass in a session, especially if it’s a library that may be used by people who aren’t familiar with sessions. So something like this may be more appropriate:
def greet(table_name, user_id, session=None):
    if not session:
        session = boto3._get_default_session()
    ddb = session.resource('dynamodb')
    table = ddb.Table(table_name)
    item = table.get_item(Key={'id': user_id})
    print('Hello {}'.format(item['Item']['name']))
This allows a caller to provide a session if they want, but falls back to the default otherwise. (Normally I would avoid accessing a private module function, but I expect this one in particular to be stable and honestly it should be public anyway.) If all of your code is written this way, then the session can be passed to any further functions this function calls.
This also allows for test frameworks to more easily control either the credentials/region that are used for testing, or even to mock out the creation of clients, etc.
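Here's a sketch of that testing benefit: because the function takes a session, a test can hand it a duck-typed stub instead of touching AWS at all. The stub classes and table/user names are illustrative, and the function returns its greeting (rather than printing it) to make the assertion easy:

```python
# Stand-ins for the session/resource/table objects, recording nothing
# but returning a canned item
class StubTable:
    def get_item(self, Key):
        return {'Item': {'id': Key['id'], 'name': 'World'}}

class StubDynamoDB:
    def Table(self, table_name):
        return StubTable()

class StubSession:
    def resource(self, service_name):
        # The function under test only ever asks for DynamoDB
        assert service_name == 'dynamodb'
        return StubDynamoDB()

def greet(session, table_name, user_id):
    ddb = session.resource('dynamodb')
    table = ddb.Table(table_name)
    item = table.get_item(Key={'id': user_id})
    return 'Hello {}'.format(item['Item']['name'])

print(greet(StubSession(), 'users', 'user-123'))  # Hello World
```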
A consequence here is that in a Lambda function, if you're only making API calls from the handler function itself, there's not much need for a session. But once you start to modularize your code into separate Python functions and classes, they should take sessions as input, and thus you should be creating a session in your handler. Do this in your function initialization code, not per invocation, and also create sessions there for any assumed roles you use (see below for how to make that work properly).
You should also use sessions for Python scripts you run from the CLI. It's good practice to take a --profile parameter, just like the AWS CLI. If it's omitted, the session will again search for the configuration as mentioned above.
parser = argparse.ArgumentParser()
# other args...
parser.add_argument('--profile')
args = parser.parse_args()

session = boto3.Session(profile_name=args.profile)

# body of the script, using the session...
Even in interactive Python sessions (the REPL or a notebook), creating sessions directly can be helpful. For example, suppose you don't have a default profile (a strategy I recommend if you have many accounts/roles/regions) and no other credentials set. If you call boto3.client() (and thus initialize the default session), the default session will be stuck without credentials, and you'll either have to clear it directly with boto3.DEFAULT_SESSION = None or restart your Python session. On the other hand, if you had just created a session with session = boto3.Session(), you could follow it up with session = boto3.Session(profile_name='my-profile') to get a session pointing to a particular profile.
A place where you need to create a session is with programmatic role assumption. When you’re using profiles, you can do something like
[profile my-base-profile]
# Set up in whatever your usual fashion is

[profile my-assumed-role-profile]
role_arn = arn:aws:iam::123456789012:role/MyRoleToAssume
source_profile = my-base-profile
# or on EC2 instance/ECS, you might do one of:
# credential_source = Ec2InstanceMetadata
# credential_source = EcsContainer
Then, in your code (or the CLI), you can use my-assumed-role-profile, and it will take care of assuming the role for you. With boto3:
base_session = boto3.Session(profile_name='my-base-profile')
print(base_session.client('sts').get_caller_identity())
# will show your normal identity

# independent of the above code
assumed_role_session = boto3.Session(profile_name='my-assumed-role-profile')
print(assumed_role_session.client('sts').get_caller_identity())
# will show that you're using MyRoleToAssume
This is very handy. boto3 actually knows when the credentials for the assumed role session expire, and if you use the session after that, the session will call AssumeRole again to refresh the credentials.
But you can’t do the profile trick, for example, in a Lambda function. So instead, I often see folks doing something like the following:
sts = boto3.client('sts')
response = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/MyRoleToAssume',
    RoleSessionName='my-session'
)
credentials = response['Credentials']
assumed_role_session = boto3.Session(
    aws_access_key_id=credentials['AccessKeyId'],
    aws_secret_access_key=credentials['SecretAccessKey'],
    aws_session_token=credentials['SessionToken'],
    region_name='people vary a ton in how they set this'
)
Sometimes people also create clients for the assumed role directly using boto3.client() with the credentials as inputs.
This does not handle credential expiration (that session or client will fail after those particular credentials expire), which may not matter for a short-running script, but it does mean that a Lambda function instance cannot use that session for the duration of its existence, which I’ve seen lead people to making an assume role call in every invocation. I also think the above code is just very tedious to deal with!
I wrote a library, aws-assume-role-lib, to help with that. It uses the same code from boto3 (botocore, actually) that the assumed-role-profile setup uses. So now your code can look like this:
base_session = boto3.Session()
# or any other config for a session, e.g.
# base_session = boto3.Session(profile_name='my-base-profile')

assumed_role_session = aws_assume_role_lib.assume_role(base_session, 'arn:aws:iam::123456789012:role/MyRoleToAssume')
assume_role() takes all the other parameters for AssumeRole, if you want to specify those.
You can even chain these sessions; you can call aws_assume_role_lib.assume_role() with the assumed_role_session to assume another role from there. And you don't need to worry about credential refreshing. In a Lambda function, you'd put the above code outside your handler, so it runs during function initialization, and both sessions will be valid for the life of the function instance.
You may notice that the session is required. I went back and forth on making it optional, but I settled on promoting session-centric code.
If you really prefer the module-level function style, you can get that, too. Or as a method on session objects! Just call aws_assume_role_lib.patch_boto3() first.
aws_assume_role_lib.patch_boto3()

assumed_role_session = boto3.assume_role('arn:aws:iam::123456789012:role/MyRoleToAssume')

# or
base_session = boto3.Session()
assumed_role_session = base_session.assume_role('arn:aws:iam::123456789012:role/MyRoleToAssume')
If you're writing a command line tool in Python, my recommendation is to provide an optional --profile argument (like the AWS CLI), and use it to create the session. If the user hasn't provided it, it will be None, and the session will search for credentials in the usual ways. For example:
import argparse
import boto3

parser = argparse.ArgumentParser()
parser.add_argument('--profile', help='Use a specific AWS config profile')
args = parser.parse_args()

session = boto3.Session(profile_name=args.profile)

# use the session
This allows your command to have parity with the AWS CLI for configuring which credentials it should be using.
Hopefully I've helped illuminate what sessions are, why they're useful, and why you should probably switch to a session-first coding style, reserving the module-level functions for, at most, quick scripts and interactive Python sessions.
As always, if you’ve got questions or comments, hit me up on Twitter.
APPENDIX: Why is the AWS Python SDK called “boto3”?
As so often happens, an AWS customer had to write something because AWS hadn’t made it themselves. That customer was Mitch Garnaat, and he started a project called “boto” in mid-2006, just months after AWS was launched. It’s named after a freshwater dolphin native to the Amazon river.
The boto library went through two major versions, but there was a fundamental scalability problem: every service needed to have its implementation written up by a human, and as you can guess, the pace of feature releases from AWS makes that unsustainable.
By 2012, Mitch had joined AWS, bringing boto with him, and a complete change was in the works, with folks like James Saryerwinnie working on it: the AWS CLI and the third major version of boto. But the change was so drastic that it became a different library altogether, boto3: all services are defined by config files that allow the service clients to be generated programmatically (and indeed, they are generated at runtime, when you first ask for a service client!). The underlying functionality was packaged into a separate library, botocore, which also powers the AWS CLI (which replaced a mishmash of separate CLI tools from different AWS services; Eric Hammond even once wrote a tool whose sole purpose was to install all the different CLIs).
These service definitions are used across all the SDKs. You can see them in botocore, and in fact, updates to those definitions (there and in other SDKs) are often a place where new services and features leak out first (AWS Managed IAM Policies are another good place for that).
Surprisingly, the last update to the original boto library was in July 2018, and there are even commits from 2019 in the repo!