AWS CLI S3 A client error (403) occurred when calling the HeadObject operation: Forbidden
I'm trying to set up an Amazon Linux AMI (ami-f0091d91) and have a script that runs a copy command to copy from an S3 bucket:
aws --debug s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .
This script works perfectly on my local machine, but fails with the following error on the Amazon image:
2016-03-22 01:07:47,110 - MainThread - botocore.auth - DEBUG - StringToSign:
HEAD
Tue, 22 Mar 2016 01:07:47 GMT
x-amz-security-token:AQoDYXdzEPr//////////wEa4ANtcDKVDItVq8Z5OKms8wpQ3MS4dxLtxVq6Om1aWDhLmZhL2zdqiasNBV4nQtVqwyPsRVyxl1Urq1BBCnZzDdl4blSklm6dvu+3efjwjhudk7AKaCEHWlTd/VR3cksSNMFTcI9aIUUwzGW8lD9y8MVpKzDkpxzNB7ZJbr9HQNu8uF/st0f45+ABLm8X4FsBPCl2I3wKqvwV/s2VioP/tJf7RGQK3FC079oxw3mOid5sEi28o0Qp4h/Vy9xEHQ28YQNHXOBafHi0vt7vZpOtOfCJBzXvKbk4zRXbLMamnWVe3V0dArncbNEgL1aAi1ooSQ8+Xps8ufFnqDp7HsquAj50p459XnPedv90uFFd6YnwiVkng9nNTAF+2Jo73+eKTt955Us25Chxvk72nAQsAZlt6NpfR+fF/Qs7jjMGSF6ucjkKbm0x5aCqCw6YknsoE1Rtn8Qz9tFxTmUzyCTNd7uRaxbswm7oHOdsM/Q69otjzqSIztlwgUh2M53LzgChQYx5RjYlrjcyAolRguJjpSq3LwZ5NEacm/W17bDOdaZL3y1977rSJrCxb7lmnHCOER5W0tsF9+XUGW1LMX69EWgFYdn5QNqFk6mcJsZWrR9dkehaQwjLPcv/29QcM+b5u/0goazCtwU=
/aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm
2016-03-22 01:07:47,111 - MainThread - botocore.endpoint - DEBUG - Sending http request: <PreparedRequest [HEAD]>
2016-03-22 01:07:47,111 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - INFO - Starting new HTTPS connection (1): aws-codedeploy-us-west-2.s3.amazonaws.com
2016-03-22 01:07:47,151 - MainThread - botocore.vendored.requests.packages.urllib3.connectionpool - DEBUG - "HEAD /latest/codedeploy-agent.noarch.rpm HTTP/1.1" 403 0
2016-03-22 01:07:47,151 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amz-id-2': '0mRvGge9ugu+KKyDmROm4jcTa1hAnA5Ax8vUlkKZXoJ//HVJAKxbpFHvOGaqiECa4sgon2F1kXw=', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': '6204CD88E880E5DD', 'date': 'Tue, 22 Mar 2016 01:07:46 GMT', 'content-type': 'application/xml'}
2016-03-22 01:07:47,152 - MainThread - botocore.parsers - DEBUG - Response body:
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.HeadObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f421075bcd0>
2016-03-22 01:07:47,152 - MainThread - botocore.retryhandler - DEBUG - No retry needed.
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <function enhance_error_msg at 0x7f4211085758>
2016-03-22 01:07:47,152 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.HeadObject: calling handler <awscli.errorhandler.ErrorHandler object at 0x7f421100cc90>
2016-03-22 01:07:47,152 - MainThread - awscli.errorhandler - DEBUG - HTTP Response Code: 403
2016-03-22 01:07:47,152 - MainThread - awscli.customizations.s3.s3handler - DEBUG - Exception caught during task execution: A client error (403) occurred when calling the HeadObject operation: Forbidden
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 100, in call
total_files, total_parts = self._enqueue_tasks(files)
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/s3handler.py", line 178, in _enqueue_tasks
for filename in files:
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/fileinfobuilder.py", line 31, in call
for file_base in files:
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 142, in call
for src_path, extra_information in file_iterator:
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 314, in list_objects
yield self._list_single_object(s3_path)
File "/usr/local/lib/python2.7/site-packages/awscli/customizations/s3/filegenerator.py", line 343, in _list_single_object
response = self._client.head_object(**params)
File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 228, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python2.7/site-packages/botocore/client.py", line 488, in _make_api_call
model=operation_model, context=request_context
File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 226, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python2.7/site-packages/botocore/hooks.py", line 209, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python2.7/site-packages/awscli/errorhandler.py", line 70, in __call__
http_status_code=http_response.status_code)
ClientError: A client error (403) occurred when calling the HeadObject operation: Forbidden
2016-03-22 01:07:47,153 - Thread-1 - awscli.customizations.s3.executor - DEBUG - Received print task: PrintTask(message='A client error (403) occurred when calling the HeadObject operation: Forbidden', error=True, total_parts=None, warning=None)
A client error (403) occurred when calling the HeadObject operation: Forbidden
However, when I run it with the --no-sign-request option, it works perfectly:
aws --debug --no-sign-request s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .
Can someone please explain what is going on?
amazon-web-services amazon-s3 aws-cli
It looks like you're (maybe implicitly) using the instance's IAM role to make the request (that would explain x-amz-security-token -- temporary credentials from the role) and your role denies access to S3... or the bucket (not yours, I take it?) doesn't allow access with credentials -- though if it's public, that's strange. As always, make sure your system clock is correct, since with HEAD the error body is always suppressed.
– Michael - sqlbot
Mar 22 '16 at 1:55
Hi, thank you for the quick response. The bucket that I'm trying to access is, indeed, public. Not sure why it is complaining about a signed request, then. It fails with a similar error on my own bucket as well without the --no-sign-request option.
– MojoJojo
Mar 22 '16 at 2:01
You do have an IAM role on this instance, right? It sounds as if that role may be restricting things, perhaps in unexpected ways.
– Michael - sqlbot
Mar 22 '16 at 2:14
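To act on that suggestion, a quick sanity check (a minimal sketch using the standard sts and s3api subcommands) is to see which identity the CLI is actually signing with, and to reproduce the failing HeadObject call directly:
# Show which credentials the CLI resolves to (IAM user, instance role, etc.)
aws sts get-caller-identity
# Reproduce the HeadObject call that `s3 cp` issues first
aws s3api head-object --bucket aws-codedeploy-us-west-2 --key latest/codedeploy-agent.noarch.rpm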
13 Answers
I figured it out. I had an error in my CloudFormation template that was creating the EC2 instances. As a result, the EC2 instances that were trying to access the above CodeDeploy buckets were in a different region (not us-west-2). It seems like the access policies on the buckets (owned by Amazon) only allow access from the region they belong in.
When I fixed the error in my template (it was a wrong parameter map), the error disappeared.
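As a quick way to verify this (a sketch using the standard EC2 instance metadata endpoint), you can ask the instance which availability zone it launched in; the region is the AZ minus its trailing letter:
# prints e.g. us-west-2a, so the region is us-west-2
curl http://169.254.169.254/latest/meta-data/placement/availability-zone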
You wrote: "access policies on the buckets (owned by Amazon) only allow access from the region they belong in." Buckets don't "belong to a region". They are Global. Wish I understood what fixed your error.
– LeslieK
Jun 17 '17 at 17:07
Buckets actually are defined in a region.
– dmohr
Aug 23 '17 at 17:07
Passing the bucket's region as a parameter worked for me.
– Giovane
Jul 12 '18 at 17:20
I was getting something similar on boto3 in a cross-region request, and discovered a bug in my policy in the process (described in this answer). Can't say it's the same fix here, but maybe that's a hint? @LeslieK
– init_js
Nov 27 '18 at 6:44
I have checked whether us-west-2a would be different from us-west-2b, and it turns out that it works either way. It does not contradict your answer, but it adds to it. Thanks.
– Yevgeniy Afanasyev
Nov 28 '18 at 7:12
I was getting the error A client error (403) occurred when calling the HeadObject operation: Forbidden for my AWS CLI copy command aws s3 cp s3://bucket/file file. I was using an IAM role which had full S3 access via an Inline Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
If I give it full S3 access from the Managed Policies instead, then the command works. I think this must be a bug on Amazon's side, because the policies in both cases were exactly the same.
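For reference, attaching that managed policy from the CLI would look roughly like this (MyInstanceRole is a placeholder for your actual role name):
aws iam attach-role-policy \
    --role-name MyInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess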
Btw, I was trying to use goofys to mount an S3 bucket in my Ubuntu server filesystem via an IAM user attached to a similar policy to the above, but with Resource: "example" set (instead of *), and that caused the inability to create files there (similar issue). I just changed it to the managed policy AmazonS3FullAccess.
– shadi
Jul 2 '17 at 13:31
This is a bad answer - you should never allow policies that allow access to everything
– Marco de Abreu
May 7 '18 at 13:44
It's a workaround as long as the original bug is not fixed
– shadi
May 7 '18 at 15:07
I've had this issue; adding --recursive to the command will help.
At this point it doesn't quite make sense, as you (like me) are only trying to copy a single file down, but it does the trick!
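If pulling down a whole prefix is undesirable, --recursive can be combined with the CLI's --exclude/--include filters so that only the one file matches (a sketch; adjust the prefix and filename to your case):
aws s3 cp s3://aws-codedeploy-us-west-2/latest/ . --recursive --exclude "*" --include "codedeploy-agent.noarch.rpm"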
It looked like success but the local downloaded "file" was in fact an empty directory
– ozma
Feb 16 '18 at 19:29
One of the reasons for this could be that you are trying to access buckets in a region which requires V4 signing. Try explicitly providing the region, e.g. --region cn-north-1.
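For the bucket in the question, that would look like this (assuming the bucket lives in us-west-2, as its name suggests):
aws --region us-west-2 s3 cp s3://aws-codedeploy-us-west-2/latest/codedeploy-agent.noarch.rpm .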
In my case the problem was the Resource statement in the user access policy.
First we had "Resource": "arn:aws:s3:::BUCKET_NAME", but in order to have access to objects within a bucket you need a /* at the end:
"Resource": "arn:aws:s3:::BUCKET_NAME/*"
Trying to solve this problem myself, I discovered that there is no HeadBucket permission. It looks like there is, because that's what the error message tells you, but actually the HEAD operation requires the ListBucket permission.
I also discovered that my IAM policy and my bucket policy were conflicting. Make sure you check both.
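A minimal policy for an aws s3 cp download therefore needs both permissions, with s3:ListBucket on the bucket ARN and s3:GetObject on the objects (a sketch with a placeholder bucket name):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-bucket"
        },
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}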
In my case, I got this error trying to get an object in an S3 bucket folder. But the object was not in that folder (I had put it in the wrong folder), so S3 sent this message. Hope it helps you too.
I was getting this error message due to my EC2 instance's clock being out of sync.
I was able to fix on Ubuntu using this:
sudo ntpdate ntp.ubuntu.com
sudo apt-get install ntp
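A quick way to spot this kind of skew (a hypothetical check; any HTTPS endpoint that returns a Date header would do) is to compare the local clock against what S3 reports:
# local clock, in UTC
date -u
# the Date header from S3's response
curl -sI https://s3.amazonaws.com | grep -i '^date'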
Oddly enough, this fixed my issue too.
– Max Prokopov
Jan 30 at 7:37
I got this error with a misconfigured test event. I changed the source bucket's ARN but forgot to edit the default S3 bucket name.
I.e. make sure that in the bucket section of the test event both the ARN and the bucket name are set correctly:
"bucket": {
"arn": "arn:aws:s3:::your_bucket_name",
"name": "your_bucket_name",
"ownerIdentity": {
"principalId": "EXAMPLE"
}
I was getting a 403 on HEAD requests while the GET requests were working. It turned out to be the CORS config in the S3 permissions. I had to add HEAD:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>HEAD</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>GET</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
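The same rules can also be applied from the command line, where put-bucket-cors takes the configuration as JSON rather than XML (a sketch; replace my-bucket with your bucket):
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration '{
    "CORSRules": [{
        "AllowedOrigins": ["*"],
        "AllowedMethods": ["HEAD", "GET", "PUT", "POST"],
        "AllowedHeaders": ["*"]
    }]
}'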
I also experienced this behaviour. In my case I found that if the IAM policy doesn't have access to read the object (s3:GetObject), the same error is raised.
I agree with you that the error raised by the AWS console & CLI is not really well explained and may cause confusion.
I have also experienced this scenario.
I have a bucket with a policy that uses AWS4-HMAC-SHA256. It turns out my awscli was not updated to the latest version; mine was aws-cli/1.10.8. Upgrading it solved the problem.
pip install awscli --upgrade --user
https://docs.aws.amazon.com/cli/latest/userguide/installing.html
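You can confirm what you are running before and after the upgrade:
# prints the CLI, Python, and botocore versions
aws --version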
You are missing a HeadBucket permission.
AFAIK this doesn't solve the problem, and it also wasn't needed in my case to fix the problem.
– trudolf
May 9 '18 at 4:25
there is no HeadBucket permission. The HEAD operation requires the ListBucket permission.
– andrew lorien
May 9 '18 at 7:26
@andrewlorien If you post this as an answer, I will +1 you. This is what I was missing! (Wish error messages mentioned the permission.. would make it sooo much easier to create minimal-access policies by trial and error!)
– Tim Malone
Jun 5 '18 at 2:16
It looks like there is a HeadBucket operation: docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html and if you go to the policy simulator, it also shows a HeadBucket permission: policysim.aws.amazon.com
– Efren
Jul 11 '18 at 3:05
add a comment |
Your Answer
StackExchange.ifUsing("editor", function () {
StackExchange.using("externalEditor", function () {
StackExchange.using("snippets", function () {
StackExchange.snippets.init();
});
});
}, "code-snippets");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "1"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f36144757%2faws-cli-s3-a-client-error-403-occurred-when-calling-the-headobject-operation%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
13 Answers
13
active
oldest
votes
13 Answers
13
active
oldest
votes
active
oldest
votes
active
oldest
votes
I figured it out. I had an error in my cloud formation template that was creating the EC2 instances. As a result, the EC2 instances that were trying to access the above code deploy buckets, were in different regions (not us-west-2). It seems like the access policies on the buckets (owned by Amazon) only allow access from the region they belong in.
When I fixed the error in my template (it was wrong parameter map), the error disappeared
1
You wrote: "access policies on the buckets (owned by Amazon) only allow access from the region they belong in." Buckets don't "belong to a region". They are Global. Wish I understood what fixed your error.
– LeslieK
Jun 17 '17 at 17:07
2
Buckets actually are defined in a region.
– dmohr
Aug 23 '17 at 17:07
Passing the bucket's region as parameter worked for me.
– Giovane
Jul 12 '18 at 17:20
I was getting something similar on boto3 in a cross-region request, and discovered a bug in my policy in the process. in this answer. Can't say it's the same fix here, but maybe that's a hint? @LeslieK
– init_js
Nov 27 '18 at 6:44
I have checked to know ifus-west-2a
would be different fromus-west-2b
and it turns out that it works either way. It does not contradict with your answer but it adds to it. Thanks.
– Yevgeniy Afanasyev
Nov 28 '18 at 7:12
add a comment |
I figured it out. I had an error in my cloud formation template that was creating the EC2 instances. As a result, the EC2 instances that were trying to access the above code deploy buckets, were in different regions (not us-west-2). It seems like the access policies on the buckets (owned by Amazon) only allow access from the region they belong in.
When I fixed the error in my template (it was wrong parameter map), the error disappeared
1
You wrote: "access policies on the buckets (owned by Amazon) only allow access from the region they belong in." Buckets don't "belong to a region". They are Global. Wish I understood what fixed your error.
– LeslieK
Jun 17 '17 at 17:07
2
Buckets actually are defined in a region.
– dmohr
Aug 23 '17 at 17:07
Passing the bucket's region as parameter worked for me.
– Giovane
Jul 12 '18 at 17:20
I was getting something similar on boto3 in a cross-region request, and discovered a bug in my policy in the process. in this answer. Can't say it's the same fix here, but maybe that's a hint? @LeslieK
– init_js
Nov 27 '18 at 6:44
I have checked to know ifus-west-2a
would be different fromus-west-2b
and it turns out that it works either way. It does not contradict with your answer but it adds to it. Thanks.
– Yevgeniy Afanasyev
Nov 28 '18 at 7:12
add a comment |
I figured it out. I had an error in my cloud formation template that was creating the EC2 instances. As a result, the EC2 instances that were trying to access the above code deploy buckets, were in different regions (not us-west-2). It seems like the access policies on the buckets (owned by Amazon) only allow access from the region they belong in.
When I fixed the error in my template (it was wrong parameter map), the error disappeared
I figured it out. I had an error in my cloud formation template that was creating the EC2 instances. As a result, the EC2 instances that were trying to access the above code deploy buckets, were in different regions (not us-west-2). It seems like the access policies on the buckets (owned by Amazon) only allow access from the region they belong in.
When I fixed the error in my template (it was wrong parameter map), the error disappeared
edited Apr 12 '17 at 9:50
answered Oct 7 '16 at 21:27
MojoJojoMojoJojo
1,37711632
1,37711632
1
You wrote: "access policies on the buckets (owned by Amazon) only allow access from the region they belong in." Buckets don't "belong to a region". They are Global. Wish I understood what fixed your error.
– LeslieK
Jun 17 '17 at 17:07
2
Buckets actually are defined in a region.
– dmohr
Aug 23 '17 at 17:07
Passing the bucket's region as parameter worked for me.
– Giovane
Jul 12 '18 at 17:20
I was getting something similar on boto3 in a cross-region request, and discovered a bug in my policy in the process. in this answer. Can't say it's the same fix here, but maybe that's a hint? @LeslieK
– init_js
Nov 27 '18 at 6:44
I have checked to know ifus-west-2a
would be different fromus-west-2b
and it turns out that it works either way. It does not contradict with your answer but it adds to it. Thanks.
– Yevgeniy Afanasyev
Nov 28 '18 at 7:12
add a comment |
1
You wrote: "access policies on the buckets (owned by Amazon) only allow access from the region they belong in." Buckets don't "belong to a region". They are Global. Wish I understood what fixed your error.
– LeslieK
Jun 17 '17 at 17:07
2
Buckets actually are defined in a region.
– dmohr
Aug 23 '17 at 17:07
Passing the bucket's region as parameter worked for me.
– Giovane
Jul 12 '18 at 17:20
I was getting something similar on boto3 in a cross-region request, and discovered a bug in my policy in the process. in this answer. Can't say it's the same fix here, but maybe that's a hint? @LeslieK
– init_js
Nov 27 '18 at 6:44
I have checked to know ifus-west-2a
would be different fromus-west-2b
and it turns out that it works either way. It does not contradict with your answer but it adds to it. Thanks.
– Yevgeniy Afanasyev
Nov 28 '18 at 7:12
1
1
You wrote: "access policies on the buckets (owned by Amazon) only allow access from the region they belong in." Buckets don't "belong to a region". They are Global. Wish I understood what fixed your error.
– LeslieK
Jun 17 '17 at 17:07
You wrote: "access policies on the buckets (owned by Amazon) only allow access from the region they belong in." Buckets don't "belong to a region". They are Global. Wish I understood what fixed your error.
– LeslieK
Jun 17 '17 at 17:07
2
2
Buckets actually are defined in a region.
– dmohr
Aug 23 '17 at 17:07
Buckets actually are defined in a region.
– dmohr
Aug 23 '17 at 17:07
Passing the bucket's region as parameter worked for me.
– Giovane
Jul 12 '18 at 17:20
Passing the bucket's region as parameter worked for me.
– Giovane
Jul 12 '18 at 17:20
I was getting something similar on boto3 in a cross-region request, and discovered a bug in my policy in the process. in this answer. Can't say it's the same fix here, but maybe that's a hint? @LeslieK
– init_js
Nov 27 '18 at 6:44
I was getting something similar on boto3 in a cross-region request, and discovered a bug in my policy in the process. in this answer. Can't say it's the same fix here, but maybe that's a hint? @LeslieK
– init_js
Nov 27 '18 at 6:44
I have checked to know if
us-west-2a
would be different from us-west-2b
and it turns out that it works either way. It does not contradict with your answer but it adds to it. Thanks.– Yevgeniy Afanasyev
Nov 28 '18 at 7:12
I have checked to know if
us-west-2a
would be different from us-west-2b
and it turns out that it works either way. It does not contradict with your answer but it adds to it. Thanks.– Yevgeniy Afanasyev
Nov 28 '18 at 7:12
add a comment |
I was getting the error A client error (403) occurred when calling the HeadObject operation: Forbidden
for my aws cli copy command aws s3 cp s3://bucket/file file
. I was using a IAM role which had full S3 access using an Inline Policy
.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
If I give it the full S3 access from the Managed Policies
instead, then the command works. I think this must be a bug from Amazon, because the policies in both cases were exactly the same.
Btw, I was trying to use goofys to mount an s3 bucket in my ubuntu server filesystem via an IAM user attached to a similar policy to the above but withResource: "example"
set (instead of*
), and that caused the inability to create files there (similar issue). I just changed it to themanaged policy
ofAmazonS3FullAccess
– shadi
Jul 2 '17 at 13:31
5
This is a bad answer - you should never allow policies that allow access to everything
– Marco de Abreu
May 7 '18 at 13:44
It's a workaround as long as the original bug is not fixed
– shadi
May 7 '18 at 15:07
add a comment |
I was getting the error A client error (403) occurred when calling the HeadObject operation: Forbidden
for my aws cli copy command aws s3 cp s3://bucket/file file
. I was using a IAM role which had full S3 access using an Inline Policy
.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
If I give it the full S3 access from the Managed Policies
instead, then the command works. I think this must be a bug from Amazon, because the policies in both cases were exactly the same.
Btw, I was trying to use goofys to mount an s3 bucket in my ubuntu server filesystem via an IAM user attached to a similar policy to the above but withResource: "example"
set (instead of*
), and that caused the inability to create files there (similar issue). I just changed it to themanaged policy
ofAmazonS3FullAccess
– shadi
Jul 2 '17 at 13:31
5
This is a bad answer - you should never allow policies that allow access to everything
– Marco de Abreu
May 7 '18 at 13:44
It's a workaround as long as the original bug is not fixed
– shadi
May 7 '18 at 15:07
add a comment |
I was getting the error A client error (403) occurred when calling the HeadObject operation: Forbidden
for my aws cli copy command aws s3 cp s3://bucket/file file
. I was using a IAM role which had full S3 access using an Inline Policy
.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
If I give it the full S3 access from the Managed Policies
instead, then the command works. I think this must be a bug from Amazon, because the policies in both cases were exactly the same.
I was getting the error A client error (403) occurred when calling the HeadObject operation: Forbidden
for my aws cli copy command aws s3 cp s3://bucket/file file
. I was using a IAM role which had full S3 access using an Inline Policy
.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
If I give it the full S3 access from the Managed Policies
instead, then the command works. I think this must be a bug from Amazon, because the policies in both cases were exactly the same.
edited May 31 '16 at 5:55
answered May 30 '16 at 18:56
shadishadi
4,55031934
4,55031934
Btw, I was trying to use goofys to mount an s3 bucket in my ubuntu server filesystem via an IAM user attached to a similar policy to the above but withResource: "example"
set (instead of*
), and that caused the inability to create files there (similar issue). I just changed it to themanaged policy
ofAmazonS3FullAccess
– shadi
Jul 2 '17 at 13:31
5
This is a bad answer - you should never allow policies that allow access to everything
– Marco de Abreu
May 7 '18 at 13:44
It's a workaround as long as the original bug is not fixed
– shadi
May 7 '18 at 15:07
add a comment |
Btw, I was trying to use goofys to mount an s3 bucket in my ubuntu server filesystem via an IAM user attached to a similar policy to the above but withResource: "example"
set (instead of*
), and that caused the inability to create files there (similar issue). I just changed it to themanaged policy
ofAmazonS3FullAccess
– shadi
Jul 2 '17 at 13:31
5
This is a bad answer - you should never allow policies that allow access to everything
– Marco de Abreu
May 7 '18 at 13:44
It's a workaround as long as the original bug is not fixed
– shadi
May 7 '18 at 15:07
Btw, I was trying to use goofys to mount an s3 bucket in my ubuntu server filesystem via an IAM user attached to a similar policy to the above but with
Resource: "example"
set (instead of *
), and that caused the inability to create files there (similar issue). I just changed it to the managed policy
of AmazonS3FullAccess
– shadi
Jul 2 '17 at 13:31
Btw, I was trying to use goofys to mount an s3 bucket in my ubuntu server filesystem via an IAM user attached to a similar policy to the above but with
Resource: "example"
set (instead of *
), and that caused the inability to create files there (similar issue). I just changed it to the managed policy
of AmazonS3FullAccess
– shadi
Jul 2 '17 at 13:31
5
5
This is a bad answer - you should never allow policies that allow access to everything
– Marco de Abreu
May 7 '18 at 13:44
This is a bad answer - you should never allow policies that allow access to everything
– Marco de Abreu
May 7 '18 at 13:44
It's a workaround as long as the original bug is not fixed
– shadi
May 7 '18 at 15:07
It's a workaround as long as the original bug is not fixed
– shadi
May 7 '18 at 15:07
add a comment |
I've had this issue, adding --recursive
to the command will help.
At this point it doesn't quite make sense as you (like me) are only trying to copy a single file down, but it does the trick!
6
It looked like success but the local downloaded "file" was in fact an empty directory
– ozma
Feb 16 '18 at 19:29
add a comment |
I've had this issue, adding --recursive
to the command will help.
At this point it doesn't quite make sense as you (like me) are only trying to copy a single file down, but it does the trick!
6
It looked like success but the local downloaded "file" was in fact an empty directory
– ozma
Feb 16 '18 at 19:29
add a comment |
I've had this issue, adding --recursive
to the command will help.
At this point it doesn't quite make sense as you (like me) are only trying to copy a single file down, but it does the trick!
I've had this issue, adding --recursive
to the command will help.
At this point it doesn't quite make sense as you (like me) are only trying to copy a single file down, but it does the trick!
answered Oct 4 '17 at 7:19
Scott Bennett-McLeishScott Bennett-McLeish
4,93293644
4,93293644
6
It looked like success but the local downloaded "file" was in fact an empty directory
– ozma
Feb 16 '18 at 19:29
add a comment |
6
It looked like success but the local downloaded "file" was in fact an empty directory
– ozma
Feb 16 '18 at 19:29
6
6
It looked like success but the local downloaded "file" was in fact an empty directory
– ozma
Feb 16 '18 at 19:29
It looked like success but the local downloaded "file" was in fact an empty directory
– ozma
Feb 16 '18 at 19:29
add a comment |
One of the reasons for this could be if you try accessing buckets of a region which requires V4-Signing. Try explicitly providing the region, as --region cn-north-1
add a comment |
One of the reasons for this could be if you try accessing buckets of a region which requires V4-Signing. Try explicitly providing the region, as --region cn-north-1
add a comment |
One of the reasons for this could be if you try accessing buckets of a region which requires V4-Signing. Try explicitly providing the region, as --region cn-north-1
One of the reasons for this could be if you try accessing buckets of a region which requires V4-Signing. Try explicitly providing the region, as --region cn-north-1
edited May 25 '17 at 14:55
metame
1,45211020
1,45211020
answered May 25 '17 at 13:32
SaurabhSaurabh
676
676
add a comment |
add a comment |
in my case the problem was the Resource
statement in the user access policy.
First we had "Resource": "arn:aws:s3:::BUCKET_NAME"
,
but in order to have access to objects within a bucket you need a /*
at the end:
"Resource": "arn:aws:s3:::BUCKET_NAME/*"
add a comment |
in my case the problem was the Resource
statement in the user access policy.
First we had "Resource": "arn:aws:s3:::BUCKET_NAME"
,
but in order to have access to objects within a bucket you need a /*
at the end:
"Resource": "arn:aws:s3:::BUCKET_NAME/*"
add a comment |
in my case the problem was the Resource
statement in the user access policy.
First we had "Resource": "arn:aws:s3:::BUCKET_NAME"
,
but in order to have access to objects within a bucket you need a /*
at the end:
"Resource": "arn:aws:s3:::BUCKET_NAME/*"
in my case the problem was the Resource
statement in the user access policy.
First we had "Resource": "arn:aws:s3:::BUCKET_NAME"
,
but in order to have access to objects within a bucket you need a /*
at the end:
"Resource": "arn:aws:s3:::BUCKET_NAME/*"
answered May 9 '18 at 4:24
trudolftrudolf
70668
70668
add a comment |
add a comment |
Trying to solve this problem myself, I discovered that there is no HeadBucket permission. It looks like there is, because that's what the error message tells you, but actually the HEAD
operation requires the ListBucket
permission.
I also discovered that my IAM policy and my bucket policy were conflicting. Make sure you check both.
add a comment |
Trying to solve this problem myself, I discovered that there is no HeadBucket permission. It looks like there is, because that's what the error message tells you, but actually the HEAD
operation requires the ListBucket
permission.
I also discovered that my IAM policy and my bucket policy were conflicting. Make sure you check both.
add a comment |
Trying to solve this problem myself, I discovered that there is no HeadBucket permission. It looks like there is, because that's what the error message tells you, but actually the HEAD
operation requires the ListBucket
permission.
I also discovered that my IAM policy and my bucket policy were conflicting. Make sure you check both.
Trying to solve this problem myself, I discovered that there is no HeadBucket permission. It looks like there is, because that's what the error message tells you, but actually the HEAD
operation requires the ListBucket
permission.
I also discovered that my IAM policy and my bucket policy were conflicting. Make sure you check both.
answered Jun 6 '18 at 7:31
andrew lorienandrew lorien
642718
642718
add a comment |
add a comment |
In my case, i got this error trying to get an object on an S3 bucket folder. But in that folder my object was not here (i put the wrong folder), so S3 send this message. Hope it could help you too.
add a comment |
In my case, i got this error trying to get an object on an S3 bucket folder. But in that folder my object was not here (i put the wrong folder), so S3 send this message. Hope it could help you too.
add a comment |
In my case, i got this error trying to get an object on an S3 bucket folder. But in that folder my object was not here (i put the wrong folder), so S3 send this message. Hope it could help you too.
In my case, i got this error trying to get an object on an S3 bucket folder. But in that folder my object was not here (i put the wrong folder), so S3 send this message. Hope it could help you too.
answered Aug 10 '18 at 6:37
VinceVince
362
362
add a comment |
add a comment |
I was getting this error message due to my EC2 instance's clock being out of sync.
I was able to fix on Ubuntu using this:
sudo ntpdate ntp.ubuntu.com
sudo apt-get install ntp
Odd enough, but this fixed my issue too.
– Max Prokopov
Jan 30 at 7:37
add a comment |
I was getting this error message due to my EC2 instance's clock being out of sync.
I was able to fix on Ubuntu using this:
sudo ntpdate ntp.ubuntu.com
sudo apt-get install ntp
Odd enough, but this fixed my issue too.
– Max Prokopov
Jan 30 at 7:37
add a comment |
I was getting this error message due to my EC2 instance's clock being out of sync.
I was able to fix on Ubuntu using this:
sudo ntpdate ntp.ubuntu.com
sudo apt-get install ntp
I was getting this error message due to my EC2 instance's clock being out of sync.
I was able to fix on Ubuntu using this:
sudo ntpdate ntp.ubuntu.com
sudo apt-get install ntp
answered Sep 29 '18 at 17:17
TatsuTatsu
435
435
Odd enough, but this fixed my issue too.
– Max Prokopov
Jan 30 at 7:37
add a comment |
Odd enough, but this fixed my issue too.
– Max Prokopov
Jan 30 at 7:37
Odd enough, but this fixed my issue too.
– Max Prokopov
Jan 30 at 7:37
Odd enough, but this fixed my issue too.
– Max Prokopov
Jan 30 at 7:37
add a comment |
I got this error with a mis-configured test event. I changed the source buckets ARN but forgot to edit the default S3 bucket name.
I.e. make sure that in the bucket section of the test event both the ARN and bucket name are set correctly:
"bucket": {
"arn": "arn:aws:s3:::your_bucket_name",
"name": "your_bucket_name",
"ownerIdentity": {
"principalId": "EXAMPLE"
}
add a comment |
I got this error with a mis-configured test event. I changed the source buckets ARN but forgot to edit the default S3 bucket name.
I.e. make sure that in the bucket section of the test event both the ARN and bucket name are set correctly:
"bucket": {
"arn": "arn:aws:s3:::your_bucket_name",
"name": "your_bucket_name",
"ownerIdentity": {
"principalId": "EXAMPLE"
}
add a comment |
I got this error with a mis-configured test event. I changed the source buckets ARN but forgot to edit the default S3 bucket name.
I.e. make sure that in the bucket section of the test event both the ARN and bucket name are set correctly:
"bucket": {
"arn": "arn:aws:s3:::your_bucket_name",
"name": "your_bucket_name",
"ownerIdentity": {
"principalId": "EXAMPLE"
}
I got this error with a mis-configured test event. I changed the source buckets ARN but forgot to edit the default S3 bucket name.
I.e. make sure that in the bucket section of the test event both the ARN and bucket name are set correctly:
"bucket": {
"arn": "arn:aws:s3:::your_bucket_name",
"name": "your_bucket_name",
"ownerIdentity": {
"principalId": "EXAMPLE"
}
answered Sep 22 '18 at 3:33
quaxquax
814
814
add a comment |
add a comment |
I was getting a 403 on HEAD requests while the GET requests were working. It turned out to be the CORS config in s3 permissions. I had to add HEAD
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>HEAD</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>GET</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
answered Nov 12 '18 at 20:29
Ioannis Tsiokos
I also experienced this behaviour. In my case I found that if the IAM policy doesn't grant read access to the object (s3:GetObject), the same error is raised.
I agree that the error raised by the AWS console and CLI is not well explained and can cause confusion.
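For reference, a minimal statement granting just that read access might look like the following (a sketch; the bucket name is a placeholder, and adding s3:ListBucket on the bucket ARN as well makes missing keys return 404 instead of 403):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your_bucket_name/*"
    }
  ]
}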
answered Nov 13 '18 at 11:54
Adrian Antunez
I have also run into this scenario.
I have a bucket with a policy that requires AWS4-HMAC-SHA256 (signature version 4), and it turned out my awscli was not up to date: mine was aws-cli/1.10.8. Upgrading it solved the problem.
pip install awscli --upgrade --user
https://docs.aws.amazon.com/cli/latest/userguide/installing.html
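You can check which version you are running before and after the upgrade (output format varies by install):
aws --version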
answered Nov 23 '18 at 4:30
Renzo Sunico
You are missing a HeadBucket permission.
answered Apr 20 '18 at 12:43
cohadar
AFAIK this doesn't solve the problem, and it also wasn't needed in my case to fix it.
– trudolf
May 9 '18 at 4:25
There is no HeadBucket permission. The HEAD operation requires the ListBucket permission.
– andrew lorien
May 9 '18 at 7:26
@andrewlorien If you post this as an answer, I will +1 you. This is what I was missing! (Wish error messages mentioned the permission... it would make it so much easier to create minimal-access policies by trial and error!)
– Tim Malone
Jun 5 '18 at 2:16
It looks like there is a HeadBucket operation: docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketHEAD.html and if you go to the policy simulator, it also shows a HeadBucket permission: policysim.aws.amazon.com
– Efren
Jul 11 '18 at 3:05
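The policy simulator mentioned in the previous comment is also scriptable from the CLI, which can settle which permission is actually missing without trial and error. A sketch, with the user ARN, bucket, and key as placeholders:
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/your_user \
  --action-names s3:GetObject s3:ListBucket \
  --resource-arns arn:aws:s3:::your_bucket_name/your_object_key arn:aws:s3:::your_bucket_name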
It looks like you're (maybe implicitly) using the instance's IAM role to make the request (that would explain x-amz-security-token -- temporary credentials from the role) and your role denies access to S3... or the bucket (not yours, I take it?) doesn't allow access with credentials -- though if it's public, that's strange. As always, make sure your system clock is correct, since with HEAD the error body is always suppressed.
– Michael - sqlbot
Mar 22 '16 at 1:55
Hi, thank you for the quick response. The bucket I'm trying to access is, indeed, public. Not sure why it's complaining about a signed request then. It fails with a similar error on my own bucket as well, without the --no-sign-request option.
– MojoJojo
Mar 22 '16 at 2:01
You do have an IAM role on this instance, right? It sounds as if that role may be restricting things, perhaps in unexpected ways.
– Michael - sqlbot
Mar 22 '16 at 2:14
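A quick way to see which credentials the CLI is actually signing with (assuming a reasonably recent CLI version), and to take the role out of the picture entirely for a public bucket, is:
# Which identity (user or assumed role) is making the requests?
aws sts get-caller-identity
# For a public object, skip signing entirely (bucket and key are placeholders)
aws s3 cp s3://your_bucket_name/your_object_key . --no-sign-request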