09-11-2021, 12:12 AM
I often use the AWS CLI to manage S3 objects because it gives me direct control over my buckets and objects without jumping into the AWS web console. It allows for scripts, automation, and a more streamlined workflow. Instead of clicking through menus, I can type commands quickly, making my life way easier.
First off, I make sure to have the AWS CLI installed and configured with my credentials. You can set that up using the command "aws configure". It prompts you for your Access Key ID, Secret Access Key, region, and output format. After you input that information, you’re all set to run commands against S3.
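If it helps, here’s roughly what that first-time setup looks like; the profile name, region, and key values below are just placeholders, and "aws sts get-caller-identity" is a quick way to confirm the credentials actually work:

```bash
# One-time setup for a named profile (profile name and values are placeholders)
aws configure --profile personal
# AWS Access Key ID [None]: AKIA................
# AWS Secret Access Key [None]: ....................
# Default region name [None]: us-east-1
# Default output format [None]: json

# Sanity check: prints the account and ARN the CLI is authenticating as
aws sts get-caller-identity --profile personal
```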
To start managing objects, I always check my S3 buckets using "aws s3 ls". This command lists all the buckets in my account, showing the names of everything I’ve set up. If you want to see the contents of a specific bucket, you can do "aws s3 ls s3://my-bucket". That lists the objects (and prefixes) at the top level of the bucket along with details like size and last modified date. It’s an instant way to visualize what’s in your bucket.
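A couple of variations I lean on a lot (the bucket name is a placeholder); "--summarize" tacks a total object count and size onto the end, which is handy:

```bash
# All buckets in the account
aws s3 ls

# Top level of one bucket
aws s3 ls s3://my-bucket

# Walk the whole bucket and print totals at the end
aws s3 ls s3://my-bucket --recursive --human-readable --summarize
```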
Uploading objects is something I frequently do, and it’s handled with "aws s3 cp". I tend to use it as "aws s3 cp local-file.txt s3://my-bucket/". This command copies a file from my local machine to the specified S3 bucket. A cool trick is adding the "--acl" flag to manage access permissions during the upload itself. For example, "--acl public-read" makes the object publicly readable right at the point of upload.
If you’re dealing with folders, you can just upload the entire directory with "aws s3 cp my-folder/ s3://my-bucket/ --recursive". The "--recursive" flag ensures all files within the directory, including subdirectories, get uploaded, which is a huge time saver.
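Roughly what those uploads look like in practice (the file, folder, and bucket names are placeholders):

```bash
# Single file
aws s3 cp local-file.txt s3://my-bucket/

# Single file, made publicly readable at upload time
aws s3 cp local-file.txt s3://my-bucket/ --acl public-read

# Whole directory tree, subfolders included
aws s3 cp my-folder/ s3://my-bucket/my-folder/ --recursive
```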
Deleting objects isn’t clunky either. If you need to remove a specific object, a simple command like "aws s3 rm s3://my-bucket/some-object.txt" will do the job. If you’re ever in a situation where you want to delete multiple files, you can use the "--recursive" option here too: "aws s3 rm s3://my-bucket/my-folder/ --recursive" wipes out an entire folder’s worth of objects. Note that the folder is part of the S3 path here, not a separate local argument.
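For example (again, the names are placeholders); the exclude/include pattern at the end is handy when you only want to clear out certain file types:

```bash
# One object
aws s3 rm s3://my-bucket/some-object.txt

# Everything under a prefix -- the prefix goes after the bucket name
aws s3 rm s3://my-bucket/my-folder/ --recursive

# Only the .log files under that prefix, nothing else
aws s3 rm s3://my-bucket/my-folder/ --recursive --exclude "*" --include "*.log"
```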
Managing permissions is vital. I often adjust bucket policies using JSON directly through the CLI. The command "aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json" allows me to apply policies without fussing over the web UI. You can adjust things like read or write permissions depending on the role of whoever accesses the bucket.
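As a sketch, a minimal read-only policy might look like this; the account ID, role name, and bucket are made-up placeholders, so adapt the principal and actions to your own setup:

```bash
# policy.json -- read-only access for one IAM role (all ARNs are placeholders)
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnly",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:role/reporting-role" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
EOF

# Apply it, then read it back to confirm it stuck
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json
aws s3api get-bucket-policy --bucket my-bucket
```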
If I need quick stats about the objects in a bucket, I usually execute "aws s3api list-objects --bucket my-bucket". It pulls a detailed list of objects with their metadata, which is excellent for audits or just understanding what I have stored at any point.
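The newer "list-objects-v2" plus a JMESPath "--query" is my usual shortcut for quick stats; the query expressions below are just examples of the kind of thing you can pull out:

```bash
# Key, size, and last-modified for everything in the bucket, as a table
aws s3api list-objects-v2 --bucket my-bucket \
  --query 'Contents[].{Key:Key,Size:Size,LastModified:LastModified}' \
  --output table

# Total object count and total bytes stored
aws s3api list-objects-v2 --bucket my-bucket \
  --query '[length(Contents[]), sum(Contents[].Size)]'
```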
Another thing I find useful is versioning. If I enable versioning on my S3 bucket using "aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled", I can track different versions of my files without hassle. This way, if a crucial file accidentally gets overwritten, I can find the earlier version with "aws s3api list-object-versions --bucket my-bucket", which lists all versions associated with each object in the bucket, and then download the one I want with "aws s3api get-object" and its "--version-id" option.
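End to end, the versioning workflow looks roughly like this; the key and version ID below are placeholders you’d swap for values from the "list-object-versions" output:

```bash
# Turn versioning on for the bucket
aws s3api put-bucket-versioning --bucket my-bucket \
  --versioning-configuration Status=Enabled

# See every version of every object (narrow it down with --prefix)
aws s3api list-object-versions --bucket my-bucket --prefix reports/

# Pull a specific older version back down (the version ID is a placeholder)
aws s3api get-object --bucket my-bucket --key reports/q3.csv \
  --version-id EXAMPLE_VERSION_ID q3-restored.csv
```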
To move objects around between buckets or folders, I’ll use "aws s3 mv". Suppose I want to transfer a file from one bucket to another; I simply run "aws s3 mv s3://source-bucket/some-object.txt s3://target-bucket/". It handles the deletion from the source after the copy.
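A few quick examples; "mv" also works on whole prefixes with "--recursive", and a rename within a bucket is just a move to a new key:

```bash
# Move a single object between buckets
aws s3 mv s3://source-bucket/some-object.txt s3://target-bucket/

# Rename within the same bucket
aws s3 mv s3://my-bucket/old-name.txt s3://my-bucket/new-name.txt

# Move an entire "folder" worth of objects
aws s3 mv s3://source-bucket/archive/ s3://target-bucket/archive/ --recursive
```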
For retrieval, I make use of the "s3 cp" command again. If you want to download an object, it’s as simple as "aws s3 cp s3://my-bucket/some-object.txt ./local-file.txt". It works well for large files too, since the CLI automatically splits big transfers into parallel chunks behind the scenes.
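Besides the plain download, there are a couple of CLI transfer settings you can tune for big objects; the numbers here are just examples, not recommendations:

```bash
# Plain download
aws s3 cp s3://my-bucket/some-object.txt ./local-file.txt

# Optional: tune how the CLI parallelizes and chunks large transfers
aws configure set default.s3.max_concurrent_requests 20
aws configure set default.s3.multipart_chunksize 64MB
```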
In terms of automation, I often utilize the AWS CLI within scripts to streamline my operations. When launching a batch job, I can include AWS commands to handle S3 data directly as part of my workflow. I write a shell script, and AWS CLI commands become just another part of it. For example, if I’ve uploaded new data, I can trigger further processing right after with commands like "aws s3 sync" to synchronize local directories with S3.
Look into "aws s3 sync local-directory/ s3://my-bucket/" to mirror your local folder's contents to the bucket. The amazing part is it only transfers the changes, saving both time and resources. This is particularly helpful when you're processing logs or images that keep getting updated.
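A bare-bones version of that kind of script might look like this; the paths and bucket are placeholders, and "--delete" is optional (it removes objects that no longer exist locally, so leave it off if that’s too aggressive):

```bash
#!/usr/bin/env bash
# sync-logs.sh -- mirror a local directory up to S3 (paths are placeholders)
set -euo pipefail

SRC="/var/data/logs/"
DEST="s3://my-bucket/logs/"

# Only changed files are transferred; --delete mirrors local deletions too
aws s3 sync "$SRC" "$DEST" --delete

echo "Sync finished at $(date)"
```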
Also, there are a few neat options worth mentioning like "--dryrun". If I'm unsure or cautious about what a certain command will do, I always run it first with "--dryrun". This simulation shows me what will be processed without making any actual changes. Say I want to check what files would be deleted by running "aws s3 rm s3://my-bucket --recursive --dryrun". It gives me peace of mind before I hit the actual delete.
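For instance, both of these only report what would happen without touching anything:

```bash
# Preview which objects a recursive delete would remove
aws s3 rm s3://my-bucket/ --recursive --dryrun

# Same idea for sync: see what would be copied before committing
aws s3 sync local-directory/ s3://my-bucket/ --dryrun
```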
I frequently automate backups of important data using scheduled scripts that utilize the AWS CLI. I set up a cron job that runs a script every night, syncing data from my local environment to S3. Having a reliable backup strategy ensures I’m never left in the lurch due to data loss.
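The cron side of it is just a single line; the schedule, script path, and log file here are placeholders (this reuses the sync script sketched above):

```bash
# crontab -e: run the backup sync every night at 02:30 and keep a log
30 2 * * * /home/me/bin/sync-logs.sh >> /var/log/s3-backup.log 2>&1
```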
Additionally, I sometimes need to handle multipart uploads for large files. A single PUT tops out at 5GB, so anything bigger has to go up in smaller chunks. The high-level "aws s3 cp" does this automatically, but when I want manual control, the command for starting a multipart upload is "aws s3api create-multipart-upload --bucket my-bucket --key large-file.zip". After that, I’ll upload each part in turn and then complete the upload using the respective commands.
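In outline, the manual s3api flow goes create, upload parts, complete; the upload ID, chunk file names, and ETag below are placeholders you’d replace with the values each command returns:

```bash
# 1. Start the upload and note the "UploadId" in the response
aws s3api create-multipart-upload --bucket my-bucket --key large-file.zip

# 2. Upload each chunk as a numbered part, saving the ETag each call returns
aws s3api upload-part --bucket my-bucket --key large-file.zip \
  --part-number 1 --body chunk-01 --upload-id EXAMPLE_UPLOAD_ID
# ...repeat for the remaining chunks with --part-number 2, 3, ...

# 3. Tell S3 how the parts fit together
aws s3api complete-multipart-upload --bucket my-bucket --key large-file.zip \
  --upload-id EXAMPLE_UPLOAD_ID \
  --multipart-upload '{"Parts":[{"ETag":"\"example-etag-1\"","PartNumber":1}]}'
```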
Lastly, I encourage you to familiarize yourself with the various options and flags that are available across commands. It’s all about streamlining your workflow. AWS CLI has extensive documentation, and it’s worth rummaging through to find hidden gems that can make your life easier, like file filtering with "--exclude" and "--include", to fine-tune what gets uploaded or downloaded.
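Two filter patterns I reach for a lot; note that the filters are applied in order and later rules win, which is why the first example excludes everything and then whitelists the types it wants back in:

```bash
# Upload only the images, skip everything else
aws s3 cp my-folder/ s3://my-bucket/ --recursive \
  --exclude "*" --include "*.jpg" --include "*.png"

# Sync everything except temp and log files
aws s3 sync my-folder/ s3://my-bucket/ --exclude "*.tmp" --exclude "*.log"
```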
Really, with AWS CLI, you harness the power of your S3 buckets and objects with precision.