• Tuple Assignment

Introduction

A tuple is a built-in data type in Python: an ordered collection of elements that may have different data types. We write a tuple by placing its elements inside parentheses, separated by commas. Tuples can also be described as lists that cannot be changed, which is why we call them immutable. We access elements by index, starting from zero. Tuples can be created in several ways; here, we will study tuple assignment, a very useful feature in Python.

In Python, tuple assignment is a particularly useful feature. It is also called unpacking of a tuple.

The process of assigning values to a tuple is known as packing. Unpacking, or tuple assignment, is the reverse: it assigns the values on the right-hand side to the variables on the left-hand side. In unpacking, we extract the values of the tuple into individual variables.

While performing tuple assignment, keep in mind that the number of variables on the left-hand side must equal the number of values on the right-hand side, that is, the number of elements in the tuple. Let us look at a few examples of packing and unpacking.


Tuple Packing (Creating Tuples)

We can create a tuple from different kinds of elements. A tuple may contain elements that are all of the same data type or of mixed data types, so there are multiple ways of creating tuples. Let us look at a few examples of creating tuples in Python, which we consider packing.

Example 1: Tuple with integers as elements

Example 2: Tuple with mixed data type

Example 3: Tuple with a tuple as an element

Example 4: Tuple with a list as an element
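Minimal sketches of the four cases named above (all values are illustrative):

```python
tuple_1 = (1, 2, 3, 4)             # Example 1: integers only
tuple_2 = (1, "hello", 3.5, True)  # Example 2: mixed data types
tuple_3 = (1, 2, (3, 4))           # Example 3: a tuple as an element
tuple_4 = (1, 2, [3, 4])           # Example 4: a list as an element
```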

If there is only a single element in a tuple, we must end it with a comma, because writing just the element inside parentheses is treated as an ordinary value of that element's own type (for example, an int).

For example,

Correct way of defining a tuple with single element is as follows:
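A sketch showing both forms (the value 5 is only an illustration):

```python
not_a_tuple = (5)          # just the integer 5
single = (5,)              # a one-element tuple, thanks to the trailing comma
print(type(not_a_tuple))   # <class 'int'>
print(type(single))        # <class 'tuple'>
```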

Moreover, if you write any comma-separated sequence of values, Python considers it a tuple.


Tuple Assignment (Unpacking)

Unpacking, or tuple assignment, is the process that assigns the values on the right-hand side to the variables on the left-hand side. In unpacking, we extract the values of the tuple into individual variables.
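A small illustration (names and values are placeholders):

```python
person = ("Asha", 21, "India")   # packing
name, age, country = person      # unpacking (tuple assignment)
print(name, age, country)        # Asha 21 India
```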

Frequently Asked Questions (FAQs)

Q1. State true or false: inserting elements in a tuple is unpacking.

A1. False. Unpacking extracts a tuple's values into separate variables; moreover, tuples are immutable, so elements cannot be inserted at all.

Q2. What is the other name for tuple assignment?

A2. Unpacking

Q3. In unpacking what is the important condition?

A3. The number of variables on the left-hand side and the number of elements in the tuple should be equal.

Q4. Which error displays when the above condition fails?

A4. A ValueError, for example ValueError: not enough values to unpack (or too many values to unpack, depending on which side has more items).


Python Tuple: How to Create, Use, and Convert

A Python tuple is one of Python’s three built-in sequence data types , the others being lists and range objects. A Python tuple shares a lot of properties with the more commonly known Python list :

  • It can hold multiple values in a single variable
  • It’s ordered: the order of items is preserved
  • A tuple can have duplicate values
  • It’s indexed: you can access items numerically
  • A tuple can have an arbitrary length

But there are significant differences:

  • A tuple is immutable; it cannot be changed once you have defined it.
  • A tuple is defined using optional parentheses () instead of square brackets []
  • Since a tuple is immutable, it can be hashed, and thus it can act as the key in a dictionary


Creating a Python tuple

We create tuples from individual values using optional parentheses (round brackets) like this:

Like everything in Python, tuples are objects and have a class that defines them. We can also create a tuple by using the tuple() constructor from that class. It allows any Python iterable type as an argument. In the following example, we create a tuple from a list:
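For instance, something along these lines (list contents are illustrative):

```python
numbers = [1, 2, 3]
t = tuple(numbers)   # tuple() accepts any iterable
print(t)             # (1, 2, 3)
```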

Now you also know how to convert a Python list to a tuple!

Which method is best?

It’s not always easy for Python to infer if you’re using regular parentheses or if you’re trying to create a tuple. To demonstrate, let’s define a tuple holding only one item:
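A sketch of the two attempts described in the next paragraph:

```python
t = (1)     # first try: this is just the integer 1
print(t)    # 1
t = (1,)    # second try: the comma makes it a tuple
print(t)    # (1,)
```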

On the first try, Python sees the number one surrounded by redundant parentheses, so it strips the expression down to the number 1. On the second try, however, we added a comma, explicitly signaling to Python that we are creating a tuple with just one element.

A tuple with just one item is useless for most use cases, but it demonstrates how Python recognizes a tuple: because of the comma.

If we can use tuple() , why is there a second method as well? The literal notation is more concise, and it has another advantage: you can use it to unpack multiple lists into a tuple in a single expression, like this:
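Something like this, with made-up list names:

```python
odd = [1, 3, 5]
even = [2, 4, 6]
numbers = (*odd, *even)
print(numbers)   # (1, 3, 5, 2, 4, 6)
```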

The leading * operator unpacks the lists into individual elements. It’s as if you would have typed them individually at that spot. This unpacking trick works for all iterable types if you were wondering!

Multiple assignment using a Python tuple

You’ve seen something called tuple unpacking in the previous topic. There’s another way to unpack a tuple, called multiple assignment. It’s something that you see used a lot, especially when returning data from a function, so it’s worth taking a look at this.

Multiple assignment works like this:

Like using the *, this type of unpacking works for all iterable types in Python, including lists and strings.

As I explained in the Python trick on returning multiple values from a Python function, unpacking tuples works great in conjunction with a function that returns multiple values. It’s a neat way of returning more than one value without having to resort to data classes or dictionaries :
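A sketch of the idea, with a hypothetical function name:

```python
def min_max(numbers):
    return min(numbers), max(numbers)   # returns a tuple

lowest, highest = min_max([3, 9, 1, 7])
print(lowest, highest)   # 1 9
```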

Indexed access

We can access a tuple using index numbers like [0] and [1] :
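For example (values are illustrative):

```python
t = ("red", "green", "blue")
print(t[0])   # red
print(t[1])   # green
```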

Append to a Python Tuple

Because a tuple is immutable, you can not append data to a tuple after creating it . For the same reason, you can’t remove data from a tuple either. You can, of course, create a new tuple from the old one and append the extra item(s) to it this way:
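A sketch of the pattern described in the next sentence, with placeholder strings:

```python
t1 = ("a", "b")
t = (*t1, "c", "d")   # unpack t1 and add two more strings
print(t)              # ('a', 'b', 'c', 'd')
```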

What we did was unpack t1 , create a new tuple with the unpacked values and two different strings and assign the result to t again.

Get tuple length

The len() function works on Python tuples just like it works on other built-in containers such as lists and strings:
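For instance:

```python
t = (1, 2, 3, 4)
print(len(t))   # 4
```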

Python Tuple vs List

The most significant difference between a Python tuple and a Python list is that a List is mutable, while a tuple is not. After defining a tuple, you can not add or remove values. In contrast, a list allows you to add or remove values at will. This property can be an advantage; you can see it as write protection. If a piece of data is not meant to change, using a tuple can prevent errors. After all, six months from now, you might have forgotten that you should not change the data. Using a tuple prevents mistakes.

Another advantage is that tuples are faster, or at least that is what people say. I have not seen proof, but it makes sense: since a tuple is immutable, its internal implementation can be simpler than a list's. In CPython, a list is a dynamic array of object pointers that over-allocates so it can grow and support insertions, while a tuple is a fixed-size array allocated once.

Python Tuple vs Set

The most significant difference between tuples and Python sets is that a tuple can have duplicates while a set can’t. The entire purpose of a set is its inability to contain duplicates. It’s an excellent tool for deduplicating your data.

Converting Python tuples

Convert tuple to list

Python lists are mutable, while tuples are not. If you need to, you can convert a tuple to a list with one of the following methods.

The cleanest and most readable way is to use the list() constructor:
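For example:

```python
t = (1, 2, 3)
as_list = list(t)
print(as_list)   # [1, 2, 3]
```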

A more concise but less readable method is to use unpacking. This unpacking can sometimes come in handy because it allows you to unpack multiple tuples into one list or add some extra values otherwise:

Convert tuple to set

Analogous to the conversion to a list, we can use set() to convert a tuple to a set:
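For example:

```python
t = (1, 2, 2, 3)
print(set(t))   # {1, 2, 3}
```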

Here, too, we can use unpacking:

Convert tuple to string

Like most objects in Python, a tuple has a so-called dunder method, called __str__ , which converts the tuple into a string. When you want to print a tuple, you don’t need to do so explicitly. Python’s print function will call this method on any object that is not a string. In other cases, you can use the str() constructor to get the string representation of a tuple:
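For example:

```python
t = (1, 2, 3)
s = str(t)
print(s)   # (1, 2, 3)
```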


13.3. Tuple Assignment with Unpacking

Python has a very powerful tuple assignment feature that allows a tuple of variable names on the left of an assignment statement to be assigned values from a tuple on the right of the assignment. Another way to think of this is that the tuple of values is unpacked into the variable names.
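A sketch of the kind of statement being described, using made-up values and seven variables on the left:

```python
person = ("Grace", "Hopper", 1906, "COBOL", "Rear Admiral", "New York", "USA")
(first, last, born, language, rank, city, country) = person
```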

This does the equivalent of seven assignment statements, all on one easy line.

Naturally, the number of variables on the left and the number of values on the right have to be the same.

Unpacking into multiple variable names also works with lists, or any other sequence type, as long as there is exactly one value for each variable. For example, you can write x, y = [3, 4] .

13.3.1. Swapping Values between Variables

This feature is used to enable swapping the values of two variables. With conventional assignment statements, we have to use a temporary variable. For example, to swap a and b :
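With a temporary variable, a swap might look like this:

```python
a = 1
b = 2
temp = a
a = b
b = temp
print(a, b)   # 2 1
```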

Tuple assignment solves this problem neatly:
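With tuple assignment:

```python
a = 1
b = 2
a, b = b, a
print(a, b)   # 2 1
```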

The left side is a tuple of variables; the right side is a tuple of values. Each value is assigned to its respective variable. All the expressions on the right side are evaluated before any of the assignments. This feature makes tuple assignment quite versatile.

13.3.2. Unpacking Into Iterator Variables

Multiple assignment with unpacking is particularly useful when you iterate through a list of tuples. You can unpack each tuple into several loop variables. For example:
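A sketch of such a loop; the first tuple comes from the surrounding text, the second is made up:

```python
names = [('Paul', 'Resnick'), ('Ada', 'Lovelace')]
for first_name, last_name in names:
    print(last_name + ", " + first_name)
```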

On the first iteration, the tuple ('Paul', 'Resnick') is unpacked into the two variables first_name and last_name . On the second iteration, the next tuple is unpacked into those same loop variables.

13.3.3. The Pythonic Way to Enumerate Items in a Sequence

When we first introduced the for loop, we provided an example of how to iterate through the indexes of a sequence, and thus enumerate the items and their positions in the sequence.

We are now prepared to understand a more pythonic approach to enumerating items in a sequence. Python provides a built-in function enumerate . It takes a sequence as input and returns a sequence of tuples. In each tuple, the first element is an integer and the second is an item from the original sequence. (It actually produces an “iterable” rather than a list, but we can use it in a for loop as the sequence to iterate over.)

The pythonic way to consume the results of enumerate, however, is to unpack the tuples while iterating through them, so that the code is easier to understand.
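For instance (the list contents are illustrative):

```python
fruits = ["apple", "pear", "banana"]
for idx, fruit in enumerate(fruits):
    print(idx, fruit)
# 0 apple
# 1 pear
# 2 banana
```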

Check your Understanding

Consider the following alternative way to swap the values of variables x and y. What’s wrong with it?
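The snippet under discussion is not reproduced here; it presumably looks something like the following (initial values are illustrative):

```python
x = 1
y = 2
y = x
x = y
print(x, y)   # 1 1: x keeps its original value, y's original value is lost
```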

  • You can't use different variable names on the left and right side of an assignment statement. (Feedback: sure you can; you can use any variable on the right-hand side that already has a value.)
  • At the end, x still has its original value instead of y's original value. (Feedback: once you assign x's value to y, y's original value is gone.)
  • Actually, it works just fine!

With only one line of code, assign the variables water , fire , electric , and grass to the values “Squirtle”, “Charmander”, “Pikachu”, and “Bulbasaur”

With only one line of code, assign four variables, v1 , v2 , v3 , and v4 , to the following four values: 1, 2, 3, 4.

If you remember, the .items() dictionary method produces a sequence of tuples. Keeping this in mind, we have provided you a dictionary called pokemon . For every key value pair, append the key to the list p_names , and append the value to the list p_number . Do not use the .keys() or .values() methods.

The .items() method produces a sequence of key-value pair tuples. With this in mind, write code to create a list of keys from the dictionary track_medal_counts and assign the list to the variable name track_events . Do NOT use the .keys() method.


Tuple Assignment Python [With Examples]

Tuple assignment is a feature that allows you to assign multiple variables simultaneously by unpacking the values from a tuple (or other iterable) into those variables.

Tuple assignment is a concise and powerful way to assign values to multiple variables in a single line of code.

Here’s how it works:
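For example (values are illustrative):

```python
my_tuple = (1, 2, 3)
a, b, c = my_tuple
print(a, b, c)   # 1 2 3
```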

In this example, the values from the my_tuple tuple are unpacked and assigned to the variables a , b , and c in the same order as they appear in the tuple.

Tuple assignment is not limited to tuples; it can also work with other iterable types like lists:

Tuple assignment can be used to swap the values of two variables without needing a temporary variable:

Tuple assignment is a versatile feature in Python and is often used when you want to work with multiple values at once, making your code more readable and concise.

Tuple Assignment Python Example

Here are some examples of tuple assignment in Python:

Example 1: Basic Tuple Assignment

Example 2: Multiple Variables Assigned at Once

Example 3: Swapping Values

Example 4: Unpacking a Tuple Inside a Loop

Example 5: Ignoring Unwanted Values
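Sketches of the five examples listed above (all values are illustrative):

```python
# Example 1: basic tuple assignment
point = (3, 4)
x, y = point

# Example 2: multiple variables assigned at once
a, b, c = 1, 2, 3

# Example 3: swapping values
a, b = b, a

# Example 4: unpacking a tuple inside a loop
pairs = [("pen", 2), ("book", 5)]
for item, quantity in pairs:
    print(item, quantity)

# Example 5: ignoring unwanted values with the _ placeholder
name, _, country = ("Ana", 33, "Peru")
```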

These examples demonstrate various uses of tuple assignment in Python, from basic variable assignment to more advanced scenarios like swapping values or ignoring unwanted elements in the tuple. Tuple assignment is a powerful tool for working with structured data in Python.


Unpacking in Python: Beyond Parallel Assignment


  • Introduction

Unpacking in Python refers to an operation that consists of assigning an iterable of values to a tuple (or list ) of variables in a single assignment statement. As a complement, the term packing can be used when we collect several values in a single variable using the iterable unpacking operator, * .

Historically, Python developers have generically referred to this kind of operation as tuple unpacking . However, since this Python feature has turned out to be quite useful and popular, it's been generalized to all kinds of iterables. Nowadays, a more modern and accurate term would be iterable unpacking .

In this tutorial, we'll learn what iterable unpacking is and how we can take advantage of this Python feature to make our code more readable, maintainable, and pythonic.

Additionally, we'll also cover some practical examples of how to use the iterable unpacking feature in the context of assignments operations, for loops, function definitions, and function calls.

  • Packing and Unpacking in Python

Python allows a tuple (or list ) of variables to appear on the left side of an assignment operation. Each variable in the tuple can receive one value (or more, if we use the * operator) from an iterable on the right side of the assignment.

For historical reasons, Python developers used to call this tuple unpacking . However, since this feature has been generalized to all kind of iterable, a more accurate term would be iterable unpacking and that's what we'll call it in this tutorial.

Unpacking operations have been quite popular among Python developers because they can make our code more readable, and elegant. Let's take a closer look to unpacking in Python and see how this feature can improve our code.

  • Unpacking Tuples

In Python, we can put a tuple of variables on the left side of an assignment operator ( = ) and a tuple of values on the right side. The values on the right will be automatically assigned to the variables on the left according to their position in the tuple . This is commonly known as tuple unpacking in Python. Check out the following example:
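For example:

```python
(a, b, c) = (1, 2, 3)
print(a, b, c)   # 1 2 3
```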

When we put tuples on both sides of an assignment operator, a tuple unpacking operation takes place. The values on the right are assigned to the variables on the left according to their relative position in each tuple . As you can see in the above example, a will be 1 , b will be 2 , and c will be 3 .

To create a tuple object, we don't need to use a pair of parentheses () as delimiters. This also works for tuple unpacking, so the following syntaxes are equivalent:
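All of the following forms are equivalent:

```python
(a, b, c) = (1, 2, 3)
(a, b, c) = 1, 2, 3
a, b, c = (1, 2, 3)
a, b, c = 1, 2, 3
```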

Since all these variations are valid Python syntax, we can use any of them, depending on the situation. Arguably, the last syntax is more commonly used when it comes to unpacking in Python.

When we are unpacking values into variables using tuple unpacking, the number of variables on the left side tuple must exactly match the number of values on the right side tuple . Otherwise, we'll get a ValueError .

For example, in the following code, we use two variables on the left and three values on the right. This will raise a ValueError telling us that there are too many values to unpack:
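A sketch of that situation:

```python
a, b = 1, 2, 3
# ValueError: too many values to unpack (expected 2)
```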

Note: The only exception to this is when we use the * operator to pack several values in one variable as we'll see later on.

On the other hand, if we use more variables than values, then we'll get a ValueError but this time the message says that there are not enough values to unpack:
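For instance:

```python
a, b, c, d = 1, 2, 3
# ValueError: not enough values to unpack (expected 4, got 3)
```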

If we use a different number of variables and values in a tuple unpacking operation, then we'll get a ValueError . That's because Python needs to unambiguously know what value goes into what variable, so it can do the assignment accordingly.

  • Unpacking Iterables

The tuple unpacking feature got so popular among Python developers that the syntax was extended to work with any iterable object. The only requirement is that the iterable yields exactly one item per variable in the receiving tuple (or list ).

Check out the following examples of how iterable unpacking works in Python:

When it comes to unpacking in Python, we can use any iterable on the right side of the assignment operator. The left side can be filled with a tuple or with a list of variables. Check out the following example in which we use a tuple on the right side of the assignment statement:

It works the same way if we use the range() iterator:

Even though this is a valid Python syntax, it's not commonly used in real code and maybe a little bit confusing for beginner Python developers.

Finally, we can also use set objects in unpacking operations. However, since sets are unordered collections, the order of the assignments can be unpredictable and can lead to subtle bugs. Check out the following example:
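For example (the resulting order is not guaranteed):

```python
a, b, c = {10, 20, 30}
print(a, b, c)   # the assignment order depends on the set's iteration order
```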

If we use sets in unpacking operations, then the final order of the assignments can be quite different from what we want and expect. So, it's best to avoid using sets in unpacking operations unless the order of assignment isn't important to our code.

  • Packing With the * Operator

The * operator is known, in this context, as the tuple (or iterable) unpacking operator . It extends the unpacking functionality to allow us to collect or pack multiple values in a single variable. In the following example, we pack a tuple of values into a single variable by using the * operator:
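A minimal sketch:

```python
*a, = 1, 2, 3
print(a)   # [1, 2, 3]
```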

For this code to work, the left side of the assignment must be a tuple (or a list ). That's why we use a trailing comma. This tuple can contain as many variables as we need. However, it can only contain one starred expression .

We can form a starred expression using the unpacking operator, * , along with a valid Python identifier, just like the *a in the above code. The rest of the variables in the left side tuple are called mandatory variables because they must be filled with concrete values; otherwise, we'll get an error. Here's how this works in practice.

Packing the trailing values in b :

Packing the starting values in a :

Packing one value in a because b and c are mandatory:

Packing no values in a ( a defaults to [] ) because b , c , and d are mandatory:

Supplying no value for a mandatory variable ( e ), so an error occurs:
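Sketches of the five cases just described, using a small illustrative sequence:

```python
a, *b = 1, 2, 3           # trailing values packed in b: a = 1, b = [2, 3]
*a, b = 1, 2, 3           # starting values packed in a: a = [1, 2], b = 3
*a, b, c = 1, 2, 3        # a = [1], b = 2, c = 3
*a, b, c, d = 1, 2, 3     # a = [], b = 1, c = 2, d = 3
*a, b, c, d, e = 1, 2, 3  # ValueError: not enough values to unpack (expected at least 4, got 3)
```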

Packing values in a variable with the * operator can be handy when we need to collect the elements of a generator in a single variable without using the list() function. In the following examples, we use the * operator to pack the elements of a generator expression and a range object into individual variables:
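For instance, with a generator expression and a range object (the names match the next paragraph):

```python
gen = (2 ** x for x in range(3))
*g, = gen
print(g)   # [1, 2, 4]

ran = range(4)
*r, = ran
print(r)   # [0, 1, 2, 3]
```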

In these examples, the * operator packs the elements of gen and ran into g and r , respectively. With this syntax, we avoid the need to call list() to create a list of values from a range object, a generator expression, or a generator function.

Notice that we can't use the unpacking operator, * , to pack multiple values into one variable without adding a trailing comma to the variable on the left side of the assignment. So, the following code won't work:
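For example:

```python
*r = range(10)
# SyntaxError: starred assignment target must be in a list or tuple
```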

If we try to use the * operator to pack several values into a single variable, then we need to use the singleton tuple syntax. For example, to make the above example work, we just need to add a comma after the variable r , like in *r, = range(10) .

  • Using Packing and Unpacking in Practice

Packing and unpacking operations can be quite useful in practice. They can make your code clear, readable, and pythonic. Let's take a look at some common use-cases of packing and unpacking in Python.

  • Assigning in Parallel

One of the most common use-cases of unpacking in Python is what we can call parallel assignment . Parallel assignment allows you to assign the values in an iterable to a tuple (or list ) of variables in a single and elegant statement.

For example, let's suppose we have a database about the employees in our company and we need to assign each item in the list to a descriptive variable. If we ignore how iterable unpacking works in Python, we might find ourselves writing code like this:
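A sketch of the index-based approach, with hypothetical employee data:

```python
employee = ["Sara Gomez", 34, "Engineer", "sara@example.com"]
name = employee[0]
age = employee[1]
job = employee[2]
email = employee[3]
```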

Even though this code works, the index handling can be clumsy, hard to type, and confusing. A cleaner, more readable, and pythonic solution can be coded as follows:
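The same assignment with unpacking, reusing the employee list from the previous snippet:

```python
name, age, job, email = employee
```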

Using unpacking in Python, we can solve the problem of the previous example with a single, straightforward, and elegant statement. This tiny change makes our code easier to read and understand for newcomer developers.

  • Swapping Values Between Variables

Another elegant application of unpacking in Python is swapping values between variables without using a temporary or auxiliary variable. For example, let's suppose we need to swap the values of two variables a and b . To do this, we can stick to the traditional solution and use a temporary variable to store the value to be swapped as follows:

This procedure takes three steps and a new temporary variable. If we use unpacking in Python, then we can achieve the same result in a single and concise step:

In statement a, b = b, a , we're reassigning a to b and b to a in one line of code. This is a lot more readable and straightforward. Also, notice that with this technique, there is no need for a new temporary variable.

  • Collecting Multiple Values With *

When we're working with some algorithms, there may be situations in which we need to split the values of an iterable or a sequence into chunks for further processing. The following example shows how to use a list and slicing operations to do so:
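A sketch using a small illustrative sequence:

```python
seq = [1, 2, 3, 4, 5]
first, body, last = seq[0], seq[1:-1], seq[-1]
print(first, body, last)   # 1 [2, 3, 4] 5
```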


Even though this code works as we expect, dealing with indices and slices can be a little bit annoying, difficult to read, and confusing for beginners. It has also the drawback of making the code rigid and difficult to maintain. In this situation, the iterable unpacking operator, * , and its ability to pack several values in a single variable can be a great tool. Check out this refactoring of the above code:
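The refactored version, continuing with the same seq:

```python
first, *body, last = seq
print(first, body, last)   # 1 [2, 3, 4] 5
```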

The line first, *body, last = seq makes the magic here. The iterable unpacking operator, * , collects the elements in the middle of seq in body . This makes our code more readable, maintainable, and flexible. You may be thinking, why more flexible? Well, suppose that seq changes its length in the road and you still need to collect the middle elements in body . In this case, since we're using unpacking in Python, no changes are needed for our code to work. Check out this example:

If we were using sequence slicing instead of iterable unpacking in Python, then we would need to update our indices and slices to correctly catch the new values.

The use of the * operator to pack several values in a single variable can be applied in a variety of configurations, provided that Python can unambiguously determine what element (or elements) to assign to each variable. Take a look at the following examples:

We can move the * operator in the tuple (or list ) of variables to collect the values according to our needs. The only condition is that Python can determine to what variable assign each value.

It's important to note that we can't use more than one starred expression in the assignment. If we do so, then we'll get a SyntaxError as follows:
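For example:

```python
*a, *b = 1, 2, 3, 4
# SyntaxError: two starred expressions in assignment (wording varies by Python version)
```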

If we use two or more * in an assignment expression, then we'll get a SyntaxError telling us that two starred expressions were found in the assignment. This is because Python can't unambiguously determine what value (or values) we want to assign to each variable.

  • Dropping Unneeded Values With *

Another common use-case of the * operator is to use it with a dummy variable name to drop some useless or unneeded values. Check out the following example:
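For example:

```python
a, b, *_ = 1, 2, 0, 0, 0
print(a, b)   # 1 2
print(_)      # [0, 0, 0]
```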

For a more insightful example of this use-case, suppose we're developing a script that needs to determine the Python version we're using. To do this, we can use the sys.version_info attribute . This attribute returns a tuple containing the five components of the version number: major , minor , micro , releaselevel , and serial . But we just need major , minor , and micro for our script to work, so we can drop the rest. Here's an example:
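A sketch of the idea:

```python
import sys

major, minor, micro, *_ = sys.version_info
print(major, minor, micro)   # e.g. 3 12 1, depending on your interpreter
```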

Now, we have three new variables with the information we need. The rest of the information is stored in the dummy variable _ , which can be ignored by our program. This makes it clear to newcomer developers that we don't want to (or need to) use the information stored in _ , because that name has no apparent meaning.

Note: By default, the underscore character _ is used by the Python interpreter to store the resulting value of the statements we run in an interactive session. So, in this context, the use of this character to identify dummy variables can be ambiguous.

  • Returning Tuples in Functions

Python functions can return several values separated by commas. Since we can define tuple objects without using parentheses, this kind of operation can be interpreted as returning a tuple of values. If we code a function that returns multiple values, then we can perform iterable packing and unpacking operations with the returned values.

Check out the following example in which we define a function to calculate the square and cube of a given number:
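One possible version of such a function:

```python
def powers(number):
    return number ** 2, number ** 3   # returns a tuple

square, cube = powers(3)
print(square, cube)   # 9 27
```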

If we define a function that returns comma-separated values, then we can do any packing or unpacking operation on these values.

  • Merging Iterables With the * Operator

Another interesting use-case for the unpacking operator, * , is the ability to merge several iterables into a final sequence. This functionality works for lists, tuples, and sets. Take a look at the following examples:
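Sketches using the names referenced a couple of paragraphs below (contents are illustrative):

```python
my_set = {1}
my_list = [2, 3]
my_tuple = (4, 5)
my_str = "ab"

merged_list = [*my_set, *my_list, *my_tuple, *range(1, 4), *my_str]
print(merged_list)   # [1, 2, 3, 4, 5, 1, 2, 3, 'a', 'b']

merged_tuple = (*my_list, *my_tuple)
print(merged_tuple)  # (2, 3, 4, 5)
```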

We can use the iterable unpacking operator, * , when defining sequences to unpack the elements of a subsequence (or iterable) into the final sequence. This will allow us to create sequences on the fly from other existing sequences without calling methods like append() , insert() , and so on.

The last two examples show that this is also a more readable and efficient way to concatenate iterables. Instead of writing list(my_set) + my_list + list(my_tuple) + list(range(1, 4)) + list(my_str) we just write [*my_set, *my_list, *my_tuple, *range(1, 4), *my_str] .

  • Unpacking Dictionaries With the ** Operator

In the context of unpacking in Python, the ** operator is called the dictionary unpacking operator . The use of this operator was extended by PEP 448 . Now, we can use it in function calls, in comprehensions and generator expressions, and in displays .

A basic use-case for the dictionary unpacking operator is to merge multiple dictionaries into one final dictionary with a single expression. Let's see how this works:
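For example, with two hypothetical dictionaries:

```python
config_defaults = {"host": "localhost", "port": 8000}
config_overrides = {"port": 9000, "debug": True}

config = {**config_defaults, **config_overrides}
print(config)   # {'host': 'localhost', 'port': 9000, 'debug': True}
```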

If we use the dictionary unpacking operator inside a dictionary display, then we can unpack dictionaries and combine them to create a final dictionary that includes the key-value pairs of the original dictionaries, just like we did in the above code.

An important point to note is that, if the dictionaries we're trying to merge have repeated or common keys, then the values of the right-most dictionary will override the values of the left-most dictionary. Here's an example:
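A sketch of that situation; the vowels name comes from the next paragraph, the other dictionary is made up:

```python
letters = {"a": "A", "b": "B", "c": "C"}
vowels = {"a": "a", "e": "e"}

merged = {**letters, **vowels}
print(merged)   # {'a': 'a', 'b': 'B', 'c': 'C', 'e': 'e'}
```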

Since the a key is present in both dictionaries, the value that prevails comes from vowels , which is the right-most dictionary. This happens because Python adds the key-value pairs from left to right; if, in the process, Python finds keys that already exist, the interpreter updates those keys with the new value. That's why the value of the a key is lowercased in the above example.

  • Unpacking in For-Loops

We can also use iterable unpacking in the context of for loops. When we run a for loop, the loop assigns one item of its iterable to the target variable in every iteration. If the item to be assigned is an iterable, then we can use a tuple of target variables. The loop will unpack the iterable at hand into the tuple of target variables.

As an example, let's suppose we have a file containing data about the sales of a company as follows:

Product    Price   Sold Units
Pencil     0.25    1500
Notebook   1.30    550
Eraser     0.75    1000
...        ...     ...

From this table, we can build a list of three-element tuples. Each tuple will contain the name of the product, the price, and the sold units. With this information, we want to calculate the income of each product. To do this, we can use a for loop like this:
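A sketch built from the table above:

```python
sales = [("Pencil", 0.25, 1500), ("Notebook", 1.30, 550), ("Eraser", 0.75, 1000)]

for item in sales:
    income = item[1] * item[2]
    print(f"{item[0]}: {income:.2f}")
```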

This code works as expected. However, we're using indices to get access to individual elements of each tuple . This can be difficult to read and to understand by newcomer developers.

Let's take a look at an alternative implementation using unpacking in Python:
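The same loop with unpacking, continuing the previous snippet:

```python
for product, price, units in sales:
    income = price * units
    print(f"{product}: {income:.2f}")
```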

We're now using iterable unpacking in our for loop. This makes our code way more readable and maintainable because we're using descriptive names to identify the elements of each tuple . This tiny change will allow a newcomer developer to quickly understand the logic behind the code.

It's also possible to use the * operator in a for loop to pack several items in a single target variable:
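For instance (the sequences are illustrative):

```python
for first, *rest in [(1, 2, 3), ("a", "b", "c", "d")]:
    print(first, rest)
# 1 [2, 3]
# a ['b', 'c', 'd']
```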

In this for loop, we're catching the first element of each sequence in first . Then the * operator catches a list of values in its target variable rest .

Finally, the structure of the target variables must agree with the structure of the iterable. Otherwise, we'll get an error. Take a look at the following example:
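A sketch of both cases, using the structures mentioned in the next paragraph:

```python
for (a, b), c in [((1, 2), 2), ((3, 4), 4)]:
    print(a, b, c)   # works: the target structure matches each item

for a, b, c in [((1, 2), 2), ((3, 4), 4)]:
    print(a, b, c)   # ValueError: not enough values to unpack (expected 3, got 2)
```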

In the first loop, the structure of the target variables, (a, b), c , agrees with the structure of the items in the iterable, ((1, 2), 2) . In this case, the loop works as expected. In contrast, the second loop uses a structure of target variables that doesn't agree with the structure of the items in the iterable, so the loop fails and raises a ValueError .

  • Packing and Unpacking in Functions

We can also use Python's packing and unpacking features when defining and calling functions. This is a quite useful and popular use-case of packing and unpacking in Python.

In this section, we'll cover the basics of how to use packing and unpacking in Python functions either in the function definition or in the function call.

Note: For a more insightful and detailed material on these topics, check out Variable-Length Arguments in Python with *args and **kwargs .

  • Defining Functions With * and **

We can use the * and ** operators in the signature of Python functions. This will allow us to call the function with a variable number of positional arguments ( * ) or with a variable number of keyword arguments, or both. Let's consider the following function:
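A sketch of a function with that kind of signature (the body and call are illustrative):

```python
def func(required, *args, **kwargs):
    print(required)
    print(args)     # extra positional arguments, packed into a tuple
    print(kwargs)   # extra keyword arguments, packed into a dict

func("must have this", 1, 2, debug=True)
# must have this
# (1, 2)
# {'debug': True}
```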

The above function requires at least one argument called required . It can accept a variable number of positional and keyword arguments as well. In this case, the * operator collects or packs extra positional arguments in a tuple called args and the ** operator collects or packs extra keyword arguments in a dictionary called kwargs . Both, args and kwargs , are optional and automatically default to () and {} respectively.

Even though the names args and kwargs are widely used by the Python community, they're not a requirement for these techniques to work. The syntax just requires * or ** followed by a valid identifier. So, if you can give meaningful names to these arguments, then do it. That will certainly improve your code's readability.

  • Calling Functions With * and **

When calling functions, we can also benefit from the use of the * and ** operator to unpack collections of arguments into separate positional or keyword arguments respectively. This is the inverse of using * and ** in the signature of a function. In the signature, the operators mean collect or pack a variable number of arguments in one identifier. In the call, they mean unpack an iterable into several arguments.

Here's a basic example of how this works:
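A sketch; the list comes from the next paragraph, while the function and dictionary are made up:

```python
def greet(word1, word2, name):
    print(word1, word2, name)

greet(*["Welcome", "to"], **{"name": "Python"})
# Welcome to Python
```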

Here, the * operator unpacks sequences like ["Welcome", "to"] into positional arguments. Similarly, the ** operator unpacks dictionaries into arguments whose names match the keys of the unpacked dictionary.

We can also combine this technique and the one covered in the previous section to write quite flexible functions. Here's an example:

The use of the * and ** operators, when defining and calling Python functions, will give them extra capabilities and make them more flexible and powerful.

Iterable unpacking turns out to be a pretty useful and popular feature in Python. This feature allows us to unpack an iterable into several variables. On the other hand, packing consists of catching several values into one variable using the unpacking operator, * .

In this tutorial, we've learned how to use iterable unpacking in Python to write more readable, maintainable, and pythonic code.

With this knowledge, we are now able to use iterable unpacking in Python to solve common problems like parallel assignment and swapping values between variables. We're also able to use this Python feature in other structures like for loops, function calls, and function definitions.


Python Unpacking Tuple

Summary : in this tutorial, you’ll learn how to unpack tuples in Python.

Reviewing Python tuples

Python defines a tuple using commas ( , ), not parentheses () . For example, the following defines a tuple with two elements:

Python uses the parentheses to make the tuple clearer:

Python also uses the parentheses to create an empty tuple:

In addition, you can use the tuple() constructor like this:

To define a tuple with only one element, you still need to use a comma. The following example illustrates how to define a tuple with one element:

It’s equivalent to the following:

Note that the following is an integer , not a tuple:

Unpacking a tuple

Unpacking a tuple means splitting the tuple’s elements into individual variables . For example:
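Using illustrative values:

```python
x, y = (1, 2)
```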

The left side, (x, y), is a tuple of two variables x and y .

The right side is also a tuple of two integers 1 and 2 .

The expression assigns the tuple elements on the right side (1, 2) to each variable on the left side (x, y) based on the relative position of each element.

In the above example, x will take 1 and y will take 2 .

See another example:

The right side is a tuple of three integers 10 , 20 , and 30 . You can quickly check its type as follows:

In the above example, the x , y , and z variables will take the values 10 , 20 , and 30 respectively.

Using tuple unpacking to swap the values of two variables

Traditionally, to swap the values of two variables, you would use a temporary variable like this:

In Python, you can use the unpacking tuple syntax to achieve the same result:

The following expression swaps the values of two variables, x and y.

In this expression, Python evaluates the right-hand side first and then assigns the variable from the left-hand side to the values from the right-hand side.

ValueError: too many values to unpack

The following example unpacks the elements of a tuple into variables. However, it’ll result in an error:

This error is because the right-hand side returns three values while the left-hand side only has two variables.

To fix this, you can add a _ variable:
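For instance:

```python
x, y, _ = 10, 20, 30
print(x, y)   # 10 20
```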

The _ variable is a regular variable in Python. By convention, it’s called a dummy variable.

Typically, you use the dummy variable for values that you don't care about and won't use afterward.

Extended unpacking using the * operator

Sometimes, you don’t want to unpack every single item in a tuple. For example, you may want to unpack the first and second elements. In this case, you can use the * operator. For example:
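A sketch using RGBA-like values; the name other is only illustrative:

```python
r, g, *other = (192, 210, 100, 0.5)
print(r)       # 192
print(g)       # 210
print(other)   # [100, 0.5]
```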

In this example, Python assigns 192 to r , 210 to g . Also, Python packs the remaining elements 100 and 0.5 into a list and assigns it to the other variable.

Notice that you can only use the * operator once on the left-hand side of an unpacking assignment.

The following example results in error:

Using the * operator on the right-hand side

Python allows you to use the * operator on the right-hand side. Suppose that you have two tuples:
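For instance (tuple names and contents are illustrative):

```python
odd_numbers = (1, 3, 5)
even_numbers = (2, 4, 6)
```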

The following example uses the * operator to unpack those tuples and merge them into a single tuple:
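Merging them with the * operator:

```python
numbers = (*odd_numbers, *even_numbers)
print(numbers)   # (1, 3, 5, 2, 4, 6)
```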

  • Python uses the commas ( , ) to define a tuple, not parentheses.
  • Unpacking tuples means assigning individual elements of a tuple to multiple variables.
  • Use the * operator to assign remaining elements of an unpacking assignment into a list and assign it to a variable.


A tuple is a collection similar to a Python list . The primary difference is that we cannot modify a tuple once it is created.

  • Create a Python Tuple

We create a tuple by placing items inside parentheses () . For example,

More on Tuple Creation

We can also create a tuple using a tuple() constructor. For example,

Here are the different types of tuples we can create in Python.

Empty Tuple

Tuple of different data types

Tuple of mixed data types

Tuple Characteristics

Tuples are:

  • Ordered - They maintain the order of elements.
  • Immutable - They cannot be changed after creation.
  • Allow duplicates - They can contain duplicate values.
  • Access Tuple Items

Each item in a tuple is associated with a number, known as an index .

The index always starts from 0 , meaning the first item of a tuple is at index 0 , the second item is at index 1, and so on.

Index of Tuple Item

Access Items Using Index

We use index numbers to access tuple items. For example,


Tuple Cannot be Modified

Python tuples are immutable (unchangeable). We cannot add, change, or delete items of a tuple.

If we try to modify a tuple, we will get an error. For example,
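For example (the tuple contents are illustrative):

```python
cars = ("BMW", "Tesla", "Ford")
cars[0] = "Audi"
# TypeError: 'tuple' object does not support item assignment
```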

  • Python Tuple Length

We use the len() function to find the number of items present in a tuple. For example,

  • Iterate Through a Tuple

We use the for loop to iterate over the items of a tuple. For example,

More on Python Tuple

We use the in keyword to check if an item exists in the tuple. For example,
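A sketch matching the two bullet points below; the exact tuple contents are illustrative:

```python
colors = ("red", "orange", "blue")
print('yellow' in colors)   # False
print('red' in colors)      # True
```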

  • yellow is not present in colors , so, 'yellow' in colors evaluates to False
  • red is present in colors , so, 'red' in colors evaluates to True

Python Tuples are immutable - we cannot change the items of a tuple once created.

If we try to do so, we will get an error. For example,

We cannot delete individual items of a tuple. However, we can delete the tuple itself using the del statement. For example,
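For example (the tuple contents are illustrative):

```python
animals = ("dog", "cat", "rat")
del animals
# using animals afterwards raises NameError: name 'animals' is not defined
```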

Here, we have deleted the animals tuple.

When we want to create a tuple with a single item, we might do the following:

But this would not create a tuple; instead, it would be considered a string .

To solve this, we need to include a trailing comma after the item. For example,


Before we wrap up, let’s put your knowledge of Python tuple to the test! Can you solve the following challenge?

Write a function to modify a tuple by adding an element at the end of it.

  • For inputs with tuple (1, 2, 3) and element 4 , the return value should be (1, 2, 3, 4) .
  • Hint: You need to first convert the tuple to another data type, such as a list.


Python's Assignment Operator: Write Robust Assignments

Python’s assignment operators allow you to define assignment statements . This type of statement lets you create, initialize, and update variables throughout your code. Variables are a fundamental cornerstone in every piece of code, and assignment statements give you complete control over variable creation and mutation.

Learning about the Python assignment operator and its use for writing assignment statements will arm you with powerful tools for writing better and more robust Python code.

In this tutorial, you’ll:

  • Use Python’s assignment operator to write assignment statements
  • Take advantage of augmented assignments in Python
  • Explore assignment variants, like assignment expressions and managed attributes
  • Become aware of illegal and dangerous assignments in Python

You’ll dive deep into Python’s assignment statements. To get the most out of this tutorial, you should be comfortable with several basic topics, including variables , built-in data types , comprehensions , functions , and Python keywords . Before diving into some of the later sections, you should also be familiar with intermediate topics, such as object-oriented programming , constants , imports , type hints , properties , descriptors , and decorators .


Assignment Statements and the Assignment Operator

One of the most powerful programming language features is the ability to create, access, and mutate variables . In Python, a variable is a name that refers to a concrete value or object, allowing you to reuse that value or object throughout your code.

To create a new variable or to update the value of an existing one in Python, you’ll use an assignment statement . This statement has the following three components:

  • A left operand, which must be a variable
  • The assignment operator ( = )
  • A right operand, which can be a concrete value , an object , or an expression

Here’s how an assignment statement will generally look in Python:
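Schematically, with a concrete illustration:

```python
# variable = expression
total = 5 + 2   # "total" is the variable, "5 + 2" is the expression
```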

Here, variable represents a generic Python variable, while expression represents any Python object that you can provide as a concrete value—also known as a literal —or an expression that evaluates to a value.

To execute an assignment statement like the above, Python runs the following steps:

  • Evaluate the right-hand expression to produce a concrete value or object . This value will live at a specific memory address in your computer.
  • Store the object’s memory address in the left-hand variable . This step creates a new variable if the current one doesn’t already exist or updates the value of an existing variable.

The second step shows that variables work differently in Python than in other programming languages. In Python, variables aren’t containers for objects. Python variables point to a value or object through its memory address. They store memory addresses rather than objects.

This behavior difference directly impacts how data moves around in Python, which is always by reference . In most cases, this difference is irrelevant in your day-to-day coding, but it’s still good to know.

The central component of an assignment statement is the assignment operator . This operator is represented by the = symbol, which separates two operands:

  • A variable, which is the left operand
  • A value or an expression that evaluates to a concrete value, which is the right operand

Operators are special symbols that perform mathematical , logical , and bitwise operations in a programming language. The objects (or object) on which an operator operates are called operands .

Unary operators, like the not Boolean operator, operate on a single object or operand, while binary operators act on two. That means the assignment operator is a binary operator.

Note: Like C , Python uses == for equality comparisons and = for assignments. Unlike C, Python doesn’t allow you to accidentally use the assignment operator ( = ) in an equality comparison.

Equality is a symmetrical relationship, and assignment is not. For example, the expression a == 42 is equivalent to 42 == a . In contrast, the statement a = 42 is correct and legal, while 42 = a isn’t allowed. You’ll learn more about illegal assignments later on.

The right-hand operand in an assignment statement can be any Python object, such as a number , list , string , dictionary , or even a user-defined object. It can also be an expression. In the end, expressions always evaluate to concrete objects, which is their return value.

Here are a few examples of assignments in Python:
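Sketches of such assignments (names and values are illustrative):

```python
number = 42                  # literal value
greeting = "Hello, World!"   # literal value
total = 15 / 3 + 2           # math expression
is_positive = total > 0      # Boolean expression
```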

The first two sample assignments in this code snippet use concrete values, also known as literals , to create and initialize number and greeting . The third example assigns the result of a math expression to the total variable, while the last example uses a Boolean expression.

Note: You can use the built-in id() function to inspect the memory address stored in a given variable.

Here’s a short example of how this function works:
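For instance:

```python
number = 42
print(id(number))   # prints an integer address; the exact value varies per run and machine
```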

The number in your output represents the memory address stored in number . Through this address, Python can access the content of number , which is the integer 42 in this example.

If you run this code on your computer, then you’ll get a different memory address because this value varies from execution to execution and computer to computer.

Unlike expressions, assignment statements don’t have a return value because their purpose is to make the association between the variable and its value. That’s why the Python interpreter doesn’t issue any output in the above examples.

Now that you know the basics of how to write an assignment statement, it’s time to tackle why you would want to use one.

The assignment statement is the explicit way for you to associate a name with an object in Python. You can use this statement for two main purposes:

  • Creating and initializing new variables
  • Updating the values of existing variables

When you use a variable name as the left operand in an assignment statement for the first time, you’re creating a new variable. At the same time, you’re initializing the variable to point to the value of the right operand.

On the other hand, when you use an existing variable in a new assignment, you’re updating or mutating the variable’s value. Strictly speaking, every new assignment will make the variable refer to a new value and stop referring to the old one. Python will garbage-collect all the values that are no longer referenced by any existing variable.

Assignment statements not only assign a value to a variable but also determine the data type of the variable at hand. This additional behavior is another important detail to consider in this kind of statement.

Because Python is a dynamically typed language, successive assignments to a given variable can change the variable’s data type. Changing the data type of a variable during a program’s execution is considered bad practice and highly discouraged. It can lead to subtle bugs that can be difficult to track down.

Unlike in math equations, in Python assignments, the left operand must be a variable rather than an expression or a value. For example, the following construct is illegal, and Python flags it as invalid syntax:
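A sketch of such an invalid construct, anticipating the hypotenuse example discussed next:

```python
hypotenuse ** 2 = base ** 2 + height ** 2
# SyntaxError: cannot assign to expression here. Maybe you meant '==' instead of '='?
# (exact wording varies by Python version)
```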

In this example, you have expressions on both sides of the = sign, and this isn’t allowed in Python code. The error message suggests that you may be confusing the equality operator with the assignment one, but that’s not the case. You’re really running an invalid assignment.

To correct this construct and convert it into a valid assignment, you’ll have to do something like the following:
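One way to write the corrected version (base and height values are illustrative):

```python
from math import sqrt

base, height = 3, 4
hypotenuse = sqrt(base ** 2 + height ** 2)
print(hypotenuse)   # 5.0
```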

In this code snippet, you first import the sqrt() function from the math module. Then you isolate the hypotenuse variable in the original equation by using the sqrt() function. Now your code works correctly.

Now you know what kind of syntax is invalid. But don’t get the idea that assignment statements are rigid and inflexible. In fact, they offer lots of room for customization, as you’ll learn next.

Python’s assignment statements are pretty flexible and versatile. You can write them in several ways, depending on your specific needs and preferences. Here’s a quick summary of the main ways to write assignments in Python:
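A compact sketch of the main variants (values are placeholders):

```python
x = 42                    # base assignment
a = b = 0                 # multiple assignment: both names refer to the same object
m, n = 1, 2               # parallel assignment
first, *rest = [1, 2, 3]  # assignment with iterable unpacking
x += 1                    # augmented assignment
```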

Up to this point, you’ve mostly learned about the base assignment syntax in the above code snippet. In the following sections, you’ll learn about multiple, parallel, and augmented assignments. You’ll also learn about assignments with iterable unpacking.

Read on to see the assignment statements in action!

Assignment Statements in Action

You’ll find and use assignment statements everywhere in your Python code. They’re a fundamental part of the language, providing an explicit way to create, initialize, and mutate variables.

You can use assignment statements with plain names, like number or counter . You can also use assignments in more complicated scenarios, such as with:

  • Qualified attribute names , like user.name
  • Indices and slices of mutable sequences, like a_list[i] and a_list[i:j]
  • Dictionary keys , like a_dict[key]

This list isn’t exhaustive. However, it gives you some idea of how flexible these statements are. You can even assign multiple values to an equal number of variables in a single line, commonly known as parallel assignment . Additionally, you can simultaneously assign the values in an iterable to a comma-separated group of variables in what’s known as an iterable unpacking operation.

In the following sections, you’ll dive deeper into all these topics and a few other exciting things that you can do with assignment statements in Python.

The most elementary use case of an assignment statement is to create a new variable and initialize it using a particular value or expression:

All these statements create new variables, assigning them initial values or expressions. For an initial value, you should always use the most sensible and least surprising value that you can think of. For example, initializing a counter to something different from 0 may be confusing and unexpected because counters almost always start having counted no objects.

Updating a variable’s current value or state is another common use case of assignment statements. In Python, assigning a new value to an existing variable doesn’t modify the variable’s current value. Instead, it causes the variable to refer to a different value. The previous value will be garbage-collected if no other variable refers to it.

Consider the following examples:
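A minimal sketch of these two assignments:

>>> greeting = "Hello, World!"
>>> greeting
'Hello, World!'
>>> greeting = "Hi, Pythonistas!"
>>> greeting
'Hi, Pythonistas!'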

These examples run two consecutive assignments on the same variable. The first one assigns the string "Hello, World!" to a new variable named greeting .

The second assignment updates the value of greeting by reassigning it the "Hi, Pythonistas!" string. In this example, the original value of greeting , the "Hello, World!" string, is lost and garbage-collected. From this point on, you can’t access the old "Hello, World!" string.

Even though running multiple assignments on the same variable during a program’s execution is common practice, you should use this feature with caution. Changing the value of a variable can make your code difficult to read, understand, and debug. To comprehend the code fully, you’ll have to remember all the places where the variable was changed and the sequential order of those changes.

Because assignments also define the data type of their target variables, it’s also possible for your code to accidentally change the type of a given variable at runtime. A change like this can lead to breaking errors, like AttributeError exceptions. Remember that strings don’t have the same methods and attributes as lists or dictionaries, for example.

In Python, you can make several variables reference the same object in a multiple-assignment line. This can be useful when you want to initialize several similar variables using the same initial value:
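A minimal sketch, with the variable names and addresses assumed:

>>> x = y = 0
>>> id(x)
4343439560
>>> id(y)
4343439560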

In this example, you chain two assignment operators in a single line. This way, your two variables refer to the same initial value of 0 . Note how both variables hold the same memory address, so they point to the same instance of 0 .

When it comes to integer variables, Python exhibits a curious behavior. It provides a numeric interval where multiple assignments behave the same as independent assignments. Consider the following examples:
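A minimal sketch of the independent assignments described next:

>>> n = 42
>>> m = 42
>>> id(n) == id(m)
True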

To create n and m , you use independent assignments. Therefore, they should point to different instances of the number 42 . However, both variables hold the same object, which you confirm by comparing their corresponding memory addresses.

Now check what happens when you use a greater initial value:
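A minimal sketch, contrasting independent assignments with a multiple assignment:

>>> n = 300
>>> m = 300
>>> id(n) == id(m)
False
>>> n = m = 300
>>> id(n) == id(m)
True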

Now n and m hold different memory addresses, which means they point to different instances of the integer number 300 . In contrast, when you use multiple assignments, both variables refer to the same object. This tiny difference can save you small bits of memory if you frequently initialize integer variables in your code.

The implicit behavior of making independent assignments point to the same integer number is actually an optimization called interning . It consists of globally caching the most commonly used integer values in day-to-day programming.

Under the hood, Python defines a numeric interval in which interning takes place. That’s the interning interval for integer numbers. You can determine this interval using a small script like the following:
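A rough sketch of such a script; the exact structure is assumed, and the result reflects CPython's small-integer cache:

# intern_interval.py -- a sketch of a script that probes the interning interval
def is_interned(value):
    a = int(str(value))  # build the number from a string, twice and independently
    b = int(str(value))
    return a is b        # both names point to the same object only if the value is cached

interned = [n for n in range(-10, 500) if is_interned(n)]
print(f"Interning interval: {interned[0]} to {interned[-1]}")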

This script helps you determine the interning interval by comparing integer numbers from -10 to 500 . If you run the script from your command line, then you’ll get an output like the following:

This output means that if you use a single number between -5 and 256 to initialize several variables in independent statements, then all these variables will point to the same object, which will help you save small bits of memory in your code.

In contrast, if you use a number that falls outside of the interning interval, then your variables will point to different objects instead. Each of these objects will occupy a different memory spot.

You can use the assignment operator to mutate the value stored at a given index in a Python list. The operator also works with list slices . The syntax to write these types of assignment statements is the following:
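In sketch form, with placeholder names assumed:

a_list[index] = expression
a_list[start:stop:step] = expression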

In the first construct, expression can return any Python object, including another list. In the second construct, expression must return a series of values as a list, tuple, or any other sequence. You’ll get a TypeError if expression returns a single value.

Note: When creating slice objects, you can use up to three arguments. These arguments are start , stop , and step . They define the number that starts the slice, the number at which the slicing must stop retrieving values, and the step between values.

Here’s an example of updating an individual value in a list:

In this example, you update the value at index 2 using an assignment statement. The original number at that index was 7 , and after the assignment, the number is 3 .

Note: Using indices and the assignment operator to update a value in a tuple or a character in a string isn’t possible because tuples and strings are immutable data types in Python.

Their immutability means that you can’t change their items in place :
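A minimal sketch, with sample values assumed:

>>> letters = ("A", "B", "C")
>>> letters[0] = "a"
Traceback (most recent call last):
    ...
TypeError: 'tuple' object does not support item assignment
>>> text = "ABC"
>>> text[0] = "a"
Traceback (most recent call last):
    ...
TypeError: 'str' object does not support item assignment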

You can’t use the assignment operator to change individual items in tuples or strings. These data types are immutable and don’t support item assignments.

It’s important to note that you can’t add new values to a list by using indices that don’t exist in the target list:

In this example, you try to add a new value to the end of numbers by using an index that doesn’t exist. This assignment isn’t allowed because there’s no way to guarantee that new indices will be consecutive. If you ever want to add a single value to the end of a list, then use the .append() method.

If you want to update several consecutive values in a list, then you can use slicing and an assignment statement:
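A minimal sketch of the four cases described below; the letter values are assumed:

>>> letters = ["A", "B", "C", "D"]
>>> letters[1:3] = ["b", "c"]
>>> letters
['A', 'b', 'c', 'D']
>>> letters[3:] = ("d", "E", "F")
>>> letters
['A', 'b', 'c', 'd', 'E', 'F']
>>> letters[2:2] = ["X"]
>>> letters
['A', 'b', 'X', 'c', 'd', 'E', 'F']
>>> letters = ["A", "B", "C", "D", "E", "F"]   # a fresh list for the last example
>>> letters[1::2] = [letter.lower() for letter in letters[1::2]]
>>> letters
['A', 'b', 'C', 'd', 'E', 'f']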

In the first example, you update the letters between indices 1 and 3 without including the letter at 3 . The second example updates the letters from index 3 until the end of the list. Note that this slicing appends a new value to the list because the target slice is shorter than the assigned values.

Also note that the new values were provided through a tuple, which means that this type of assignment allows you to use other types of sequences to update your target list.

The third example updates a single value using a slice where both indices are equal. In this example, the assignment inserts a new item into your target list.

In the final example, you use a step of 2 to replace alternating letters with their lowercase counterparts. This slicing starts at index 1 and runs through the whole list, stepping by two items each time.

Updating the value of an existing key or adding new key-value pairs to a dictionary is another common use case of assignment statements. To do these operations, you can use the following syntax:
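In sketch form, with placeholder names assumed:

a_dict[existing_key] = expression
a_dict[new_key] = expression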

The first construct helps you update the current value of an existing key, while the second construct allows you to add a new key-value pair to the dictionary.

For example, to update an existing key, you can do something like this:
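A minimal sketch; the inventory contents are assumed:

>>> inventory = {"apples": 100, "oranges": 80}
>>> inventory["oranges"] = 140
>>> inventory
{'apples': 100, 'oranges': 140}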

In this example, you update the current inventory of oranges in your store using an assignment. The left operand is the existing dictionary key, and the right operand is the desired new value.

While you can’t add new values to a list by assignment, dictionaries do allow you to add new key-value pairs using the assignment operator. In the example below, you add a lemon key to inventory :

In this example, you successfully add a new key-value pair to your inventory with 100 units. This addition is possible because dictionaries don’t have consecutive indices but unique keys, which are safe to add by assignment.

The assignment statement does more than assign the result of a single expression to a single variable. It can also cope nicely with assigning multiple values to multiple variables simultaneously in what’s known as a parallel assignment .

Here’s the general syntax for parallel assignments in Python:

Note that the left side of the statement can be either a tuple or a list of variables. Remember that to create a tuple, you just need a series of comma-separated elements. In this case, these elements must be variables.

The right side of the statement must be a sequence or iterable of values or expressions. In any case, the number of elements in the right operand must match the number of variables on the left. Otherwise, you’ll get a ValueError exception.

In the following example, you compute the two solutions of a quadratic equation using a parallel assignment:
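A minimal sketch; the coefficient values are assumed:

>>> from math import sqrt
>>> a, b, c = 1, -3, 2
>>> x1, x2 = (
...     (-b + sqrt(b ** 2 - 4 * a * c)) / (2 * a),
...     (-b - sqrt(b ** 2 - 4 * a * c)) / (2 * a),
... )
>>> x1, x2
(2.0, 1.0)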

In this example, you first import sqrt() from the math module. Then you initialize the equation’s coefficients in a parallel assignment.

The equation’s solution is computed in another parallel assignment. The left operand contains a tuple of two variables, x1 and x2 . The right operand consists of a tuple of expressions that compute the solutions for the equation. Note how each result is assigned to each variable by position.

A classical use case of parallel assignment is to swap values between variables:
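A minimal sketch, with the starting values assumed:

>>> previous_value, next_value = "first", "second"
>>> previous_value, next_value = next_value, previous_value
>>> previous_value, next_value
('second', 'first')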

The parallel assignment on the second line does the magic and swaps the values of previous_value and next_value at the same time. Note that in a programming language that doesn’t support this kind of assignment, you’d have to use a temporary variable to produce the same effect:
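A minimal sketch of the temporary-variable version:

>>> previous_value, next_value = "first", "second"
>>> temp = previous_value
>>> previous_value = next_value
>>> next_value = temp
>>> previous_value, next_value
('second', 'first')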

In this example, instead of using parallel assignment to swap values between variables, you use a new variable to temporarily store the value of previous_value to avoid losing its reference.

For a concrete example of when you’d need to swap values between variables, say you’re learning how to implement the bubble sort algorithm , and you come up with the following function:
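A minimal sketch of such a function:

def bubble_sort(items):
    for _ in range(len(items) - 1):
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                # Parallel assignment swaps the two values in place
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

print(bubble_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]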

In the swap line, you use a parallel assignment to swap values in place if the current value is greater than the next value in the input list. To dive deeper into the bubble sort algorithm and into sorting algorithms in general, check out Sorting Algorithms in Python .

You can use assignment statements for iterable unpacking in Python. Unpacking an iterable means assigning its values to a series of variables one by one. The iterable must be the right operand in the assignment, while the variables must be the left operand.

Like in parallel assignments, the variables must come as a tuple or list. The number of variables must match the number of values in the iterable. Alternatively, you can use the unpacking operator ( * ) to grab several values in a variable if the number of variables doesn’t match the iterable length.

Here’s the general syntax for iterable unpacking in Python:

Iterable unpacking is a powerful feature that you can use all around your code. It can help you write more readable and concise code. For example, you may find yourself doing something like this:
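A minimal sketch of the repetitive, index-based version; the list values are assumed:

>>> numbers = [1, 2, 3, 4]
>>> one = numbers[0]
>>> two = numbers[1]
>>> three = numbers[2]
>>> four = numbers[3]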

Whenever you do something like this in your code, go ahead and replace it with a more readable iterable unpacking using a single and elegant assignment, like in the following code snippet:
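The same result with a single unpacking assignment:

>>> numbers = [1, 2, 3, 4]
>>> one, two, three, four = numbers
>>> one, four
(1, 4)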

The numbers list on the right side contains four values. The assignment operator unpacks these values into the four variables on the left side of the statement. The values in numbers get assigned to variables in the same order that they appear in the iterable. The assignment is done by position.

Note: Because Python sets are also iterables, you can use them in an iterable unpacking operation. However, it won’t be clear which value goes to which variable because sets are unordered data structures.

The above example shows the most common form of iterable unpacking in Python. The main condition for the example to work is that the number of variables matches the number of values in the iterable.

What if you don’t know the iterable length upfront? Will the unpacking work? It’ll work if you use the * operator to pack several values into one of your target variables.

For example, say that you want to unpack the first and second values in numbers into two different variables. Additionally, you would like to pack the rest of the values in a single variable conveniently called rest . In this case, you can use the unpacking operator like in the following code:
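A minimal sketch, with the list values assumed:

>>> numbers = [1, 2, 3, 4]
>>> first, second, *rest = numbers
>>> first
1
>>> second
2
>>> rest
[3, 4]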

In this example, first and second hold the first and second values in numbers , respectively. These values are assigned by position. The * operator packs all the remaining values in the input iterable into rest .

The unpacking operator ( * ) can appear at any position in your series of target variables. However, you can only use one instance of the operator:
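A minimal sketch showing the operator in different positions, with sample values assumed:

>>> numbers = [1, 2, 3, 4]
>>> *head, last = numbers
>>> head, last
([1, 2, 3], 4)
>>> first, *middle, last = numbers
>>> middle
[2, 3]
>>> *first, *last = numbers   # two starred targets: this raises a SyntaxError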

The iterable unpacking operator works in any position in your list of variables. Note that you can only use one unpacking operator per assignment. Using more than one unpacking operator isn’t allowed and raises a SyntaxError .

Dropping away unwanted values from the iterable is a common use case for the iterable unpacking operator. Consider the following example:
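A minimal sketch, with the input values assumed:

>>> *_, useful = (1, 2, 3)
>>> useful
3
>>> _
[1, 2]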

In Python, if you want to signal that a variable won’t be used, then you use an underscore ( _ ) as the variable’s name. In this example, useful holds the only value that you need to use from the input iterable. The _ variable is a placeholder that guarantees that the unpacking works correctly. You won’t use the values that end up in this disposable variable.

Note: In the example above, if your target iterable is a sequence data type, such as a list or tuple, then it’s best to access its last item directly.

To do this, you can use the -1 index:

Using -1 gives you access to the last item of any sequence data type. In contrast, if you’re dealing with iterators , then you won’t be able to use indices. That’s when the *_ syntax comes to your rescue.

The pattern used in the above example comes in handy when you have a function that returns multiple values, and you only need a few of these values in your code. The os.walk() function may provide a good example of this situation.

This function allows you to iterate over the content of a directory recursively. The function returns a generator object that yields three-item tuples. Each tuple contains the following items:

  • The path to the current directory as a string
  • The names of all the immediate subdirectories as a list of strings
  • The names of all the files in the current directory as a list of strings

Now say that you want to iterate over your home directory and list only the files. You can do something like this:
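A minimal sketch; the home-directory path is an assumption you'd replace with your own:

import os

for _, _, filenames in os.walk("/home/user/"):
    print(filenames)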

This code will issue a long output depending on the current content of your home directory. Note that you need to provide a string with the path to your user folder for the example to work. The _ placeholder variable will hold the unwanted data.

In contrast, the filenames variable will hold the list of files in the current directory, which is the data that you need. The code will print the list of filenames. Go ahead and give it a try!

The assignment operator also comes in handy when you need to provide default argument values in your functions and methods. Default argument values allow you to define functions that take arguments with sensible defaults. These defaults allow you to call the function with specific values or to simply rely on the defaults.

As an example, consider the following function:
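A minimal sketch of such a function; the default value is assumed:

def greet(name="Pythonista"):
    print(f"Hello, {name}!")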

This function takes one argument, called name . This argument has a sensible default value that’ll be used when you call the function without arguments. To provide this sensible default value, you use an assignment.

Note: According to PEP 8 , the style guide for Python code, you shouldn’t use spaces around the assignment operator when providing default argument values in function definitions.

Here’s how the function works:

If you don’t provide a name during the call to greet() , then the function uses the default value provided in the definition. If you provide a name, then the function uses it instead of the default one.

Up to this point, you’ve learned a lot about the Python assignment operator and how to use it for writing different types of assignment statements. In the following sections, you’ll dive into a great feature of assignment statements in Python. You’ll learn about augmented assignments .

Augmented Assignment Operators in Python

Python supports what are known as augmented assignments . An augmented assignment combines the assignment operator with another operator to make the statement more concise. Most Python math and bitwise operators have an augmented assignment variation that looks something like this:
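In sketch form, using the generic placeholder operator described below:

variable $= expression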

Note that $ isn’t a valid Python operator. In this example, it’s a placeholder for a generic operator. This statement works as follows:

  • Evaluate expression to produce a value.
  • Run the operation defined by the operator that prefixes the = sign, using the previous value of variable and the return value of expression as operands.
  • Assign the resulting value back to variable .

In practice, an augmented assignment like the above is equivalent to the following statement:
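In sketch form, again using the generic placeholder operator:

variable = variable $ expression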

As you can conclude, augmented assignments are syntactic sugar . They provide a shorthand notation for a specific and popular kind of assignment.

For example, say that you need to define a counter variable to count some stuff in your code. You can use the += operator to increment counter by 1 using the following code:
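A minimal sketch:

>>> counter = 0
>>> counter += 1
>>> counter
1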

In this example, the += operator, known as augmented addition , adds 1 to the previous value in counter each time you run the statement counter += 1 .

It’s important to note that unlike regular assignments, augmented assignments don’t create new variables. They only allow you to update existing variables. If you use an augmented assignment with an undefined variable, then you get a NameError :

Python evaluates the right side of the statement before assigning the resulting value back to the target variable. In this specific example, when Python tries to compute x + 1 , it finds that x isn’t defined.

Great! You now know that an augmented assignment consists of combining the assignment operator with another operator, like a math or bitwise operator. To continue this discussion, you’ll learn which math operators have an augmented variation in Python.

An equation like x = x + b doesn’t make sense in math. But in programming, a statement like x = x + b is perfectly valid and can be extremely useful. It adds b to x and reassigns the result back to x .

As you already learned, Python provides an operator to shorten x = x + b . Yes, the += operator allows you to write x += b instead. Python also offers augmented assignment operators for most math operators. Here’s a summary:

Operator   Example     Equivalent    Description
+=         x += y      x = x + y     Adds the right operand to the left operand and stores the result in the left operand
-=         x -= y      x = x - y     Subtracts the right operand from the left operand and stores the result in the left operand
*=         x *= y      x = x * y     Multiplies the right operand with the left operand and stores the result in the left operand
/=         x /= y      x = x / y     Divides the left operand by the right operand and stores the result in the left operand
//=        x //= y     x = x // y    Performs floor division of the left operand by the right operand and stores the result in the left operand
%=         x %= y      x = x % y     Finds the remainder of dividing the left operand by the right operand and stores the result in the left operand
**=        x **= y     x = x ** y    Raises the left operand to the power of the right operand and stores the result in the left operand

The Example column provides generic examples of how to use the operators in actual code. Note that x must be previously defined for the operators to work correctly. On the other hand, y can be either a concrete value or an expression that returns a value.

Note: The matrix multiplication operator ( @ ) doesn’t support augmented assignments yet.

Consider the following example of matrix multiplication using NumPy arrays:
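A hedged sketch, assuming NumPy is installed; whether @= works in place depends on your NumPy version, so the operation is wrapped in a try block:

import numpy as np

matrix = np.array([[1, 2], [3, 4]])
other = np.array([[5, 6], [7, 8]])

try:
    matrix @= other          # older NumPy versions raise a TypeError here
except TypeError as error:
    print(error)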

Note that, on NumPy versions that don’t implement in-place matrix multiplication, the exception message indicates that the operation isn’t supported yet.

To illustrate how augmented assignment operators work, say that you need to create a function that takes an iterable of numeric values and returns their sum. You can write this function like in the code below:
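A minimal sketch of such a function:

def sum_numbers(numbers):
    total = 0
    for number in numbers:
        total += number   # accumulate each value into total
    return total

print(sum_numbers([1, 2, 3, 4]))  # 10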

In this function, you first initialize total to 0 . In each iteration, the loop adds a new number to total using the augmented addition operator ( += ). When the loop terminates, total holds the sum of all the input numbers. Variables like total are known as accumulators . The += operator is typically used to update accumulators.

Note: Computing the sum of a series of numeric values is a common operation in programming. Python provides the built-in sum() function for this specific computation.

Another interesting example of using an augmented assignment is when you need to implement a countdown while loop to reverse an iterable. In this case, you can use the -= operator:
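A minimal sketch of such a generator function:

def custom_reversed(sequence):
    index = len(sequence) - 1
    while index >= 0:
        yield sequence[index]
        index -= 1          # count the index down on every iteration

print(list(custom_reversed([1, 2, 3])))  # [3, 2, 1]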

In this example, custom_reversed() is a generator function because it uses yield . Calling the function creates an iterator that yields items from the input iterable in reverse order. To decrement the control variable, index , you use an augmented subtraction statement that subtracts 1 from the variable in every iteration.

Note: Similar to summing the values in an iterable, reversing an iterable is also a common requirement. Python provides the built-in reversed() function for this specific computation, so you don’t have to implement your own. The above example only intends to show the -= operator in action.

Finally, counters are a special type of accumulators that allow you to count objects. Here’s an example of a letter counter:
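A minimal sketch, with the input string assumed:

>>> letter_counts = {}
>>> for letter in "mississippi":
...     letter_counts.setdefault(letter, 0)
...     letter_counts[letter] += 1
...
>>> letter_counts
{'m': 1, 'i': 4, 's': 4, 'p': 2}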

To create this counter, you use a Python dictionary. The keys store the letters. The values store the counts. Again, to increment the counter, you use an augmented addition.

Counters are so common in programming that Python provides a tool specially designed to facilitate the task of counting. Check out Python’s Counter: The Pythonic Way to Count Objects for a complete guide on how to use this tool.

The += and *= augmented assignment operators also work with sequences , such as lists, tuples, and strings. The += operator performs augmented concatenations , while the *= operator performs augmented repetition .

These operators behave differently with mutable and immutable data types:

Operator   Example          Description
+=         seq_1 += seq_2   Runs an augmented concatenation operation on the target sequence. Mutable sequences are updated in place. If the sequence is immutable, then a new sequence is created and assigned back to the target name.
*=         seq *= n         Adds seq to itself n times. Mutable sequences are updated in place. If the sequence is immutable, then a new sequence is created and assigned back to the target name.

Note that the augmented concatenation operator operates on two sequences, while the augmented repetition operator works on a sequence and an integer number.

Consider the following examples and pay attention to the result of calling the id() function:
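A minimal sketch; the sample values and addresses are assumed, and only the address comparison matters:

>>> a_list = [1, 2]
>>> id(a_list)
4400035200
>>> a_list += [3, 4]          # in place: the list keeps its identity
>>> id(a_list)
4400035200

>>> a_tuple = (1, 2)
>>> id(a_tuple)
4400012416
>>> a_tuple += (3, 4)         # a brand-new tuple is created and reassigned
>>> id(a_tuple)
4399998080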

Mutable sequences like lists support the += augmented assignment operator through the .__iadd__() method, which performs an in-place addition. This method mutates the underlying list, appending new values to its end.

Note: If the left operand is mutable, then x += y may not be completely equivalent to x = x + y . For example, if you do list_1 = list_1 + list_2 instead of list_1 += list_2 above, then you’ll create a new list instead of mutating the existing one. This may be important if other variables refer to the same list.

Immutable sequences, such as tuples and strings, don’t provide an .__iadd__() method. Therefore, augmented concatenations fall back to the .__add__() method, which doesn’t modify the sequence in place but returns a new sequence.

There’s another difference between mutable and immutable sequences when you use them in an augmented concatenation. Consider the following examples:

With mutable sequences, the data to be concatenated can come as a list, tuple, string, or any other iterable. In contrast, with immutable sequences, the data can only come as objects of the same type. You can concatenate tuples to tuples and strings to strings, for example.

Again, the augmented repetition operator works with a sequence on the left side of the operator and an integer on the right side. This integer value represents the number of repetitions to get in the resulting sequence:
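A minimal sketch; the values and addresses are assumed:

>>> a_list = [[1, 2, 3]]
>>> id(a_list)
4399974912
>>> a_list *= 4               # in place: same list object, repeated content
>>> id(a_list)
4399974912
>>> a_list[0] is a_list[3]
True

>>> greeting = "Hi!"
>>> greeting *= 2             # a new string object is created
>>> greeting
'Hi!Hi!'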

When the *= operator operates on a mutable sequence, it falls back to the .__imul__() method, which performs the operation in place, modifying the underlying sequence. In contrast, if *= operates on an immutable sequence, then .__mul__() is called, returning a new sequence of the same type.

Note: Values of n less than 0 are treated as 0 , which returns an empty sequence of the same data type as the target sequence on the left side of the *= operator.

Note that a_list[0] is a_list[3] returns True . This is because the *= operator doesn’t make a copy of the repeated data. It only reflects the data. This behavior can be a source of issues when you use the operator with mutable values.

For example, say that you want to create a list of lists to represent a matrix, and you need to initialize the list with n empty lists, like in the following code:
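A minimal sketch:

>>> matrix = [[]]
>>> matrix *= 3
>>> matrix
[[], [], []]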

In this example, you use the *= operator to populate matrix with three empty lists. Now check out what happens when you try to populate the first sublist in matrix :
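A minimal sketch, continuing the example above:

>>> matrix[0].append(1)
>>> matrix[0].append(2)
>>> matrix
[[1, 2], [1, 2], [1, 2]]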

The appended values are reflected in the three sublists. This happens because the *= operator doesn’t make copies of the data that you want to repeat. It only reflects the data. Therefore, every sublist in matrix points to the same object and memory address.

If you ever need to initialize a list with a bunch of empty sublists, then use a list comprehension :
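A minimal sketch of the comprehension-based alternative:

>>> matrix = [[] for _ in range(3)]
>>> matrix[0].append(1)
>>> matrix
[[1], [], []]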

This time, when you populate the first sublist of matrix , your changes aren’t propagated to the other sublists. This is because all the sublists are different objects that live in different memory addresses.

Bitwise operators also have their augmented versions. The logic behind them is similar to that of the math operators. The following table summarizes the augmented bitwise operators that Python provides:

Operator   Operation                              Example     Equivalent
&=         Augmented bitwise AND (and)            x &= y      x = x & y
|=         Augmented bitwise OR (or)              x |= y      x = x | y
^=         Augmented bitwise XOR (exclusive or)   x ^= y      x = x ^ y
>>=        Augmented bitwise right shift          x >>= y     x = x >> y
<<=        Augmented bitwise left shift           x <<= y     x = x << y

The augmented bitwise assignment operators perform the intended operation by taking the current value of the left operand as a starting point for the computation. Consider the following example, which uses the & and &= operators:
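A minimal sketch, with the operand values assumed:

>>> value = 0b1100
>>> value & 0b1010
8
>>> value &= 0b1010           # same operation, result stored back in value
>>> bin(value)
'0b1000'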

Programmers who work with high-level languages like Python rarely use bitwise operations in day-to-day coding. However, these types of operations can be useful in some situations.

For example, say that you’re implementing a Unix-style permission system for your users to access a given resource. In this case, you can use the characters "r" for reading, "w" for writing, and "x" for execution permissions, respectively. However, using bit-based permissions could be more memory efficient:

You can assign permissions to your users with the OR bitwise operator or the augmented OR bitwise operator. Finally, you can use the bitwise AND operator to check if a user has a certain permission, as you did in the final two examples.

You’ve learned a lot about augmented assignment operators and statements in this and the previous sections. These operators apply to math, concatenation, repetition, and bitwise operations. Now you’re ready to look at other assignment variants that you can use in your code or find in other developers’ code.

Other Assignment Variants

So far, you’ve learned that Python’s assignment statements and the assignment operator are present in many different scenarios and use cases. Those use cases include variable creation and initialization, parallel assignments, iterable unpacking, augmented assignments, and more.

In the following sections, you’ll learn about a few variants of assignment statements that can be useful in your future coding. You can also find these assignment variants in other developers’ code. So, you should be aware of them and know how they work in practice.

In short, you’ll learn about:

  • Annotated assignment statements with type hints
  • Assignment expressions with the walrus operator
  • Managed attribute assignments with properties and descriptors
  • Implicit assignments in Python

These topics will take you through several interesting and useful examples that showcase the power of Python’s assignment statements.

PEP 526 introduced a dedicated syntax for variable annotation back in Python 3.6 . The syntax consists of the variable name followed by a colon ( : ) and the variable type:
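In sketch form, with the variable names and types assumed:

counter: int
name: str
price: float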

Even though these statements declare three variables with their corresponding data types, the variables aren’t actually created or initialized. So, for example, you can’t use any of these variables in an augmented assignment statement:
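A minimal sketch, using one of the annotated names above:

>>> counter: int
>>> counter += 1
Traceback (most recent call last):
    ...
NameError: name 'counter' is not defined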

If you try to use one of the previously declared variables in an augmented assignment, then you get a NameError because the annotation syntax doesn’t define the variable. To actually define it, you need to use an assignment.

The good news is that you can use the variable annotation syntax in an assignment statement with the = operator:
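A minimal sketch of an annotated assignment:

>>> counter: int = 0
>>> counter
0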

The first statement in this example is what you can call an annotated assignment statement in Python. You may ask yourself why you should use type annotations in this type of assignment if everybody can see that counter holds an integer number. You’re right. In this example, the variable type is unambiguous.

However, imagine what would happen if you found a variable initialization like the following:
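A minimal sketch of such an initialization:

users = []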

What would be the data type of each user in users ? If the initialization of users is far away from the definition of the User class, then there’s no quick way to answer this question. To clarify this ambiguity, you can provide the appropriate type hint for users :
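A minimal sketch, assuming Python 3.9+ where list[User] is valid at runtime (otherwise use List[User] from typing):

users: list[User] = []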

Now you’re clearly communicating that users will hold a list of User instances. Using type hints in assignment statements that initialize variables to empty collection data types—such as lists, tuples, or dictionaries—allows you to provide more context about how your code works. This practice will make your code more explicit and less error-prone.

Up to this point, you’ve learned that regular assignment statements with the = operator don’t have a return value. They just create or update variables. Therefore, you can’t use a regular assignment to assign a value to a variable within the context of an expression.

Python 3.8 changed this by introducing a new type of assignment statement through PEP 572 . This new statement is known as an assignment expression or named expression .

Note: Expressions are a special type of statement in Python. Their distinguishing characteristic is that expressions always have a return value, which isn’t the case with all types of statements.

Unlike regular assignments, assignment expressions have a return value, which is why they’re called expressions in the first place. This return value is automatically assigned to a variable. To write an assignment expression, you must use the walrus operator ( := ), which was named for its resemblance to the eyes and tusks of a walrus lying on its side.

The general syntax of an assignment expression is as follows:
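In sketch form, with placeholder names assumed:

(variable := expression)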

This expression looks like a regular assignment. However, instead of using the assignment operator ( = ), it uses the walrus operator ( := ). For the expression to work correctly, the enclosing parentheses are required in most use cases. However, there are certain situations in which these parentheses are superfluous. Either way, they won’t hurt you.

Assignment expressions come in handy when you want to reuse the result of an expression or part of an expression without using a dedicated assignment to grab this value beforehand.

Note: Assignment expressions with the walrus operator have several practical use cases. They also have a few restrictions. For example, they’re illegal in certain contexts, such as lambda functions, parallel assignments, and augmented assignments.

For a deep dive into this special type of assignment, check out The Walrus Operator: Python’s Assignment Expressions .

A particularly handy use case for assignment expressions is when you need to grab the result of an expression used in the context of a conditional statement. For example, say that you need to write a function to compute the mean of a sample of numeric values. Without the walrus operator, you could do something like this:
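A minimal sketch of the walrus-free version; returning 0 for an empty sample is an assumption:

def mean(sample):
    n = len(sample)
    if n == 0:
        return 0
    return sum(sample) / n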

In this example, the sample size ( n ) is a value that you need to reuse in two different computations. First, you need to check whether the sample has data points or not. Then you need to use the sample size to compute the mean. To be able to reuse n , you wrote a dedicated assignment statement at the beginning of your function to grab the sample size.

You can avoid this extra step by combining it with the first use of the target value, len(sample) , using an assignment expression like the following:
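A minimal sketch of the same function using an assignment expression:

def mean(sample):
    if (n := len(sample)) == 0:
        return 0
    return sum(sample) / n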

The assignment expression introduced in the conditional computes the sample size and assigns it to n . This way, you guarantee that you have a reference to the sample size to use in further computations.

Because the assignment expression returns the sample size anyway, the conditional can check whether that size equals 0 or not and then take a certain course of action depending on the result of this check. The return statement computes the sample’s mean and sends the result back to the function caller.

Python provides a few tools that allow you to fine-tune the operations behind the assignment of attributes. The attributes that run implicit operations on assignments are commonly referred to as managed attributes .

Properties are the most commonly used tool for providing managed attributes in your classes. However, you can also use descriptors and, in some cases, the .__setitem__() special method.

To understand what fine-tuning the operation behind an assignment means, say that you need a Point class that only allows numeric values for its coordinates, x and y . To write this class, you must set up a validation mechanism to reject non-numeric values. You can use properties to attach the validation functionality on top of x and y .

Here’s how you can write your class:

In Point , you use properties for the .x and .y coordinates. Each property has a getter and a setter method . The getter method returns the attribute at hand. The setter method runs the input validation using a try … except block and the built-in float() function. Then the method assigns the result to the actual attribute.

Here’s how your class works in practice:

When you use a property-based attribute as the left operand in an assignment statement, Python automatically calls the property’s setter method, running any computation from it.

Because both .x and .y are properties, the input validation runs whenever you assign a value to either attribute. In the first example, the input values are valid numbers and the validation passes. In the final example, "one" isn’t a valid numeric value, so the validation fails.

If you look at your Point class, you’ll note that it follows a repetitive pattern, with the getter and setter methods looking quite similar. To avoid this repetition, you can use a descriptor instead of a property.

A descriptor is a class that implements the descriptor protocol , which consists of four special methods :

  • .__get__() runs when you access the attribute represented by the descriptor.
  • .__set__() runs when you use the attribute in an assignment statement.
  • .__delete__() runs when you use the attribute in a del statement.
  • .__set_name__() sets the attribute’s name, creating a name-aware attribute.

Here’s how your code may look if you use a descriptor to represent the coordinates of your Point class:

You’ve removed repetitive code by defining Coordinate as a descriptor that manages the input validation in a single place. Go ahead and run the following code to try out the new implementation of Point :

Great! The class works as expected. Thanks to the Coordinate descriptor, you now have a more concise and non-repetitive version of your original code.

Another way to fine-tune the operations behind an assignment statement is to provide a custom implementation of .__setitem__() in your class. You’ll use this method in classes representing mutable data collections, such as custom list-like or dictionary-like classes.

As an example, say that you need to create a dictionary-like class that stores its keys in lowercase letters:
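A minimal sketch of such a class; the class name and sample keys are assumptions:

from collections import UserDict

class LowerCasedDict(UserDict):
    def __setitem__(self, key, value):
        # Normalize every key to lowercase before storing it
        super().__setitem__(key.lower(), value)

numbers = LowerCasedDict()
numbers["ONE"] = 1
numbers["Two"] = 2
print(numbers)   # {'one': 1, 'two': 2}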

In this example, you create a dictionary-like class by subclassing UserDict from collections . Your class implements a .__setitem__() method, which takes key and value as arguments. The method uses str.lower() to convert key into lowercase letters before storing it in the underlying dictionary.

Python implicitly calls .__setitem__() every time you use a key as the left operand in an assignment statement. This behavior allows you to tweak how you process the assignment of keys in your custom dictionary.

Implicit Assignments in Python

Python implicitly runs assignments in many different contexts. In most cases, these implicit assignments are part of the language syntax. In other cases, they support specific behaviors.

Whenever you complete an action in the following list, Python runs an implicit assignment for you:

  • Define or call a function
  • Define or instantiate a class
  • Use the current instance , self
  • Import modules and objects
  • Use a decorator
  • Use the control variable in a for loop or a comprehension
  • Use the as qualifier in with statements , imports, and try … except blocks
  • Access the _ special variable in an interactive session

Behind the scenes, Python performs an assignment in every one of the above situations. In the following subsections, you’ll take a tour of all these situations.

When you define a function, the def keyword implicitly assigns a function object to your function’s name. Here’s an example:
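A minimal sketch:

def greet(name):
    print(f"Hello, {name}!")

fellow = "Pythonista"
greet(fellow)   # Hello, Pythonista!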

From this point on, the name greet refers to a function object that lives at a given memory address in your computer. You can call the function using its name and a pair of parentheses with appropriate arguments. This way, you can reuse greet() wherever you need it.

If you call your greet() function with fellow as an argument, then Python implicitly assigns the input argument value to the name parameter on the function’s definition. The parameter will hold a reference to the input arguments.

When you define a class with the class keyword, you’re assigning a specific name to a class object . You can later use this name to create instances of that class. Consider the following example:
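A minimal sketch; the attribute values are assumed:

class User:
    def __init__(self, name, job):
        self.name = name
        self.job = job

jane = User("Jane Doe", "Engineer")
print(User)        # <class '__main__.User'>
print(jane.name)   # Jane Doe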

In this example, the name User holds a reference to a class object, which was defined in __main__.User . Like with a function, when you call the class’s constructor with the appropriate arguments to create an instance, Python assigns the arguments to the parameters defined in the class initializer .

Another example of implicit assignments is the current instance of a class, which in Python is called self by convention. This name implicitly gets a reference to the current object whenever you instantiate a class. Thanks to this implicit assignment, you can access .name and .job from within the class without getting a NameError in your code.

Import statements are another variant of implicit assignments in Python. Through an import statement, you assign a name to a module object, class, function, or any other imported object. This name is then created in your current namespace so that you can access it later in your code:
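A minimal sketch, assuming a fresh interactive session:

>>> "sys" in dir()
False
>>> import sys
>>> "sys" in dir()
True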

In this example, you import the sys module object from the standard library and assign it to the sys name, which is now available in your namespace, as you can conclude from the second call to the built-in dir() function.

You also run an implicit assignment when you use a decorator in your code. The decorator syntax is just a shortcut for a formal assignment like the following:
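In sketch form, with placeholder names assumed:

func = decorator(func)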

Here, you call decorator() with a function object as an argument. This call will typically add functionality on top of the existing function, func() , and return a function object, which is then reassigned to the func name.

The decorator syntax is syntactic sugar for replacing the previous assignment, which you can now write as follows:
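In sketch form, with the same placeholder names:

@decorator
def func():
    ...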

Even though this new code looks pretty different from the above assignment, the code implicitly runs the same steps.

Another situation in which Python automatically runs an implicit assignment is when you use a for loop or a comprehension. In both cases, you can have one or more control variables that you then use in the loop or comprehension body:
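A minimal sketch; the loop values and addresses are assumed:

>>> for control_variable in ["one", "two", "three"]:
...     print(control_variable, "->", id(control_variable))
...
one -> 4399471536
two -> 4399471600
three -> 4399472048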

The memory address of control_variable changes on each iteration of the loop. This is because Python internally reassigns a new value from the loop iterable to the loop control variable on each cycle.

The same behavior appears in comprehensions:
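A minimal sketch mirroring the loop above:

>>> [f"{item} -> {id(item)}" for item in ["one", "two", "three"]]
['one -> 4399471536', 'two -> 4399471600', 'three -> 4399472048']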

In the end, comprehensions work like for loops but use a more concise syntax. This comprehension creates a new list of strings that mimic the output from the previous example.

The as keyword in with statements, except clauses, and import statements is another example of an implicit assignment in Python. This time, the assignment isn’t completely implicit because the as keyword provides an explicit way to define the target variable.

In a with statement, the target variable that follows the as keyword will hold a reference to the context manager that you’re working with. As an example, say that you have a hello.txt file with the following content:

You want to open this file and print each of its lines on your screen. In this case, you can use the with statement to open the file using the built-in open() function.

In the example below, you accomplish this. You also add some calls to print() that display information about the target variable defined by the as keyword:
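A minimal sketch, assuming hello.txt exists in the current directory with a few lines of text:

with open("hello.txt", mode="r", encoding="utf-8") as hello:
    print(hello)        # an io.TextIOWrapper instance
    print(id(hello))    # the file object's memory address
    for line in hello:
        print(line.strip())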

This with statement uses the open() function to open hello.txt . The open() function is a context manager that returns a text file object represented by an io.TextIOWrapper instance.

Since you’ve defined a hello target variable with the as keyword, now that variable holds a reference to the file object itself. You confirm this by printing the object and its memory address. Finally, the for loop iterates over the lines and prints this content to the screen.

When it comes to using the as keyword in the context of an except clause, the target variable will contain an exception object if any exception occurs:
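A minimal sketch:

>>> try:
...     result = 42 / 0
... except ZeroDivisionError as error:
...     print(error)
...
division by zero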

In this example, you run a division that raises a ZeroDivisionError . The as keyword assigns the raised exception to error . Note that when you print the exception object, you get only the message because exceptions have a custom .__str__() method that supports this behavior.

There’s a final detail to remember when using the as specifier in a try … except block like the one in the above example. Once you leave the except block, the target variable goes out of scope , and you can’t use it anymore.

Finally, Python’s import statements also support the as keyword. In this context, you can use as to import objects with a different name:

In these examples, you use the as keyword to import the numpy package with the np name and pandas with the name pd . If you call dir() , then you’ll realize that np and pd are now in your namespace. However, the numpy and pandas names are not.

Using the as keyword in your imports comes in handy when you want to use shorter names for your objects or when you need to use different objects that originally had the same name in your code. It’s also useful when you want to make your imported names non-public using a leading underscore, like in import sys as _sys .

The final implicit assignment that you’ll learn about in this tutorial only occurs when you’re using Python in an interactive session. Every time you run a statement that returns a value, the interpreter stores the result in a special variable denoted by a single underscore character ( _ ).

You can access this special variable as you’d access any other variable:
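A minimal sketch of an interactive session; the sample expressions are assumed:

>>> 5 + 2
7
>>> _
7
>>> len("Python")
6
>>> _
6
>>> print("no return value")
no return value
>>> _                      # still 6: print() returned None
6
>>> total = 10             # a regular assignment doesn't touch _
>>> total
10
>>> _
10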

These examples cover several situations in which Python internally uses the _ variable. The first two examples evaluate expressions. Expressions always have a return value, which is automatically assigned to the _ variable every time.

When it comes to function calls, note that if your function returns a fruitful value, then _ will hold it. In contrast, if your function returns None , then the _ variable will remain untouched.

The next example consists of a regular assignment statement. As you already know, regular assignments don’t return any value, so the _ variable isn’t updated after these statements run. Finally, note that accessing a variable in an interactive session returns the value stored in the target variable. This value is then assigned to the _ variable.

Note that since _ is a regular variable, you can use it in other expressions:
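A minimal sketch, with the sample values assumed:

>>> values = [2, 4, 6, 8]
>>> len(values)
4
>>> sum(values) / _
5.0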

In this example, you first create a list of values. Then you call len() to get the number of values in the list. Python automatically stores this value in the _ variable. Finally, you use _ to compute the mean of your list of values.

Now that you’ve learned about some of the implicit assignments that Python runs under the hood, it’s time to dig into a final assignment-related topic. In the following few sections, you’ll learn about some illegal and dangerous assignments that you should be aware of and avoid in your code.

Illegal and Dangerous Assignments in Python

In Python, you’ll find a few situations in which using assignments is either forbidden or dangerous. You must be aware of these special situations and try to avoid them in your code.

In the following sections, you’ll learn when using assignment statements isn’t allowed in Python. You’ll also learn about some situations in which using assignments should be avoided if you want to keep your code consistent and robust.

You can’t use Python keywords as variable names in assignment statements. This kind of assignment is explicitly forbidden. If you try to use a keyword as a variable name in an assignment, then you get a SyntaxError :

Whenever you try to use a keyword as the left operand in an assignment statement, you get a SyntaxError . Keywords are an intrinsic part of the language and can’t be overridden.

If you ever feel the need to name one of your variables using a Python keyword, then you can append an underscore to the name of your variable:
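A minimal sketch:

>>> class_ = "economy"
>>> class_
'economy'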

In this example, you’re using the desired name for your variables. Because you added a final underscore to the names, Python doesn’t recognize them as keywords, so it doesn’t raise an error.

Note: Even though adding an underscore at the end of a name is an officially recommended practice , it can be confusing sometimes. Therefore, try to find an alternative name or use a synonym whenever you find yourself using this convention.

For example, you can write something like this:
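A minimal sketch:

booking_class = "economy"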

In this example, using the name booking_class for your variable is way clearer and more descriptive than using class_ .

You’ll also find that you can use only a few keywords as part of the right operand in an assignment statement. Those keywords will generally define simple statements that return a value or object. These include lambda , and , or , not , True , False , None , in , and is . You can also use the for keyword when it’s part of a comprehension and the if keyword when it’s used as part of a ternary operator .

In an assignment, you can never use a compound statement as the right operand. Compound statements are those that require an indented block, such as for and while loops, conditionals, with statements, try … except blocks, and class or function definitions.

Sometimes, you need to name variables, but the desired or ideal name is already taken and used as a built-in name. If this is your case, think harder and find another name. Don’t shadow the built-in.

Shadowing built-in names can cause hard-to-identify problems in your code. A common example of this issue is using list or dict to name user-defined variables. In this case, you override the corresponding built-in names, which won’t work as expected if you use them later in your code.

Consider the following example:
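A minimal sketch; the list values are assumed:

>>> list = [1, 2, 3]                        # shadows the built-in list
>>> list(map(lambda x: x ** 2, [4, 5, 6]))
Traceback (most recent call last):
    ...
TypeError: 'list' object is not callable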

The exception in this example may sound surprising. How come you can’t use list() to build a list from a call to map() that returns a generator of square numbers?

By using the name list to identify your list of numbers, you shadowed the built-in list name. Now that name points to a list object rather than the built-in class. List objects aren’t callable, so your code no longer works.

In Python, you’ll have nothing that warns against using built-in, standard-library, or even relevant third-party names to identify your own variables. Therefore, you should keep an eye out for this practice. It can be a source of hard-to-debug errors.

In programming, a constant refers to a name associated with a value that never changes during a program’s execution. Unlike other programming languages, Python doesn’t have a dedicated syntax for defining constants. This fact implies that Python doesn’t have constants in the strict sense of the word.

Python only has variables. If you need a constant in Python, then you’ll have to define a variable and guarantee that it won’t change during your code’s execution. To do that, you must avoid using that variable as the left operand in an assignment statement.

To tell other Python programmers that a given variable should be treated as a constant, you must write your variable’s name in capital letters with underscores separating the words. This naming convention has been adopted by the Python community and is a recommendation that you’ll find in the Constants section of PEP 8 .

In the following examples, you define some constants in Python:
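A minimal sketch; the constant names and values are assumptions:

MAX_SPEED = 300
DEFAULT_TIMEOUT = 5
ALLOWED_USERS = ("alice", "bob")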

The problem with these constants is that they’re actually variables. Nothing prevents you from changing their value during your code’s execution. So, at any time, you can do something like the following:
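A minimal sketch, reassigning two of the constants defined above:

MAX_SPEED = 450          # nothing stops you from doing this
DEFAULT_TIMEOUT = 30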

These assignments modify the value of two of your original constants. Python doesn’t complain about these changes, which can cause issues later in your code. As a Python developer, you must guarantee that named constants in your code remain constant.

The only way to do that is never to use named constants in an assignment statement other than the constant definition.

You’ve learned a lot about Python’s assignment operators and how to use them for writing assignment statements . With this type of statement, you can create, initialize, and update variables according to your needs. Now you have the required skills to fully manage the creation and mutation of variables in your Python code.

In this tutorial, you’ve learned how to:

  • Write assignment statements using Python’s assignment operators
  • Work with augmented assignments in Python
  • Explore assignment variants, like assignment expressions and managed attributes
  • Identify illegal and dangerous assignments in Python

Learning about the Python assignment operator and how to use it in assignment statements is a fundamental skill in Python. It empowers you to write reliable and effective Python code.


Unpacking a Tuple in Python

Python Tuples: In Python, tuples are used to store immutable objects. Tuples are very similar to lists except in a few situations. Because tuples are immutable, they cannot be modified anywhere in the program.

Packing and Unpacking a Tuple: Python has a very powerful tuple assignment feature that assigns the values on the right-hand side to the variables on the left-hand side. This is also called unpacking a tuple of values into variables. In packing, we put values into a new tuple, while in unpacking we extract those values back into individual variables.

Example 1 
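A minimal sketch of basic packing and unpacking; the sample values are assumed:

# packing values into a tuple
my_tuple = ("python", 2024, 3.8)

# unpacking the tuple into separate variables
language, year, version = my_tuple

print(language)   # python
print(year)       # 2024
print(version)    # 3.8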

NOTE: When unpacking a tuple, the number of variables on the left-hand side should be equal to the number of values in the given tuple. Python also provides a special syntax, *args, for tuple unpacking. This means that *args can absorb any number of values: each regular variable on the left-hand side receives one value, and all remaining values are assigned to *args. For a better understanding, consider the following code.

Example 2 
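A minimal sketch of unpacking with *args; the sample values are assumed:

numbers = (1, 2, 3, 4, 5)

# x and y take one value each; *args collects whatever is left
x, y, *args = numbers

print(x)      # 1
print(y)      # 2
print(args)   # [3, 4, 5]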

In Python, a tuple can also be unpacked through a function call: the tuple is passed as an argument, and its values are unpacked into the function's ordinary parameters. Consider the following code for a better understanding.

Example 3 : 
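A minimal sketch of unpacking a tuple into a function's parameters; the sample values are assumed:

def print_record(name, age, country):
    # The tuple's values arrive here as ordinary parameters
    print(name, age, country)

record = ("Alice", 30, "India")
print_record(*record)   # Alice 30 India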


Python Tuples

Tuples are used to store multiple items in a single variable.

Tuple is one of the 4 built-in data types in Python used to store collections of data; the other 3 are List, Set, and Dictionary, all with different qualities and usage.

A tuple is a collection which is ordered and unchangeable .

Tuples are written with round brackets.

Create a Tuple:
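The example code was stripped during extraction; a typical illustration (the values are placeholders) would be:

    thistuple = ("apple", "banana", "cherry")
    print(thistuple)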

Tuple Items

Tuple items are ordered, unchangeable, and allow duplicate values.

Tuple items are indexed, the first item has index [0] , the second item has index [1] etc.

When we say that tuples are ordered, it means that the items have a defined order, and that order will not change.

Unchangeable

Tuples are unchangeable, meaning that we cannot change, add or remove items after the tuple has been created.

Allow Duplicates

Since tuples are indexed, they can have items with the same value:

Tuples allow duplicate values:


Tuple Length

To determine how many items a tuple has, use the len() function:

Print the number of items in the tuple:
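A minimal sketch of such an example:

    thistuple = ("apple", "banana", "cherry")
    print(len(thistuple))  # 3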

Create Tuple With One Item

To create a tuple with only one item, you have to add a comma after the item, otherwise Python will not recognize it as a tuple.

One item tuple, remember the comma:
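A sketch showing the difference the trailing comma makes:

    thistuple = ("apple",)
    print(type(thistuple))  # <class 'tuple'>

    # NOT a tuple: without the comma this is just a parenthesised string.
    notatuple = ("apple")
    print(type(notatuple))  # <class 'str'>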

Tuple Items - Data Types

Tuple items can be of any data type:

String, int and boolean data types:

A tuple can contain different data types:

A tuple with strings, integers and boolean values:
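For instance (the values are chosen arbitrarily):

    tuple1 = ("abc", 34, True, 40.5, "male")
    print(tuple1)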

From Python's perspective, tuples are defined as objects with the data type 'tuple':

What is the data type of a tuple?

The tuple() Constructor

It is also possible to use the tuple() constructor to make a tuple.

Using the tuple() method to make a tuple:
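A minimal sketch; note the double round brackets, since tuple() takes a single iterable argument:

    thistuple = tuple(("apple", "banana", "cherry"))
    print(thistuple)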

Python Collections (Arrays)

There are four collection data types in the Python programming language:

  • List is a collection which is ordered and changeable. Allows duplicate members.
  • Tuple is a collection which is ordered and unchangeable. Allows duplicate members.
  • Set is a collection which is unordered, unchangeable*, and unindexed. No duplicate members.
  • Dictionary is a collection which is ordered** and changeable. No duplicate members.

*Set items are unchangeable, but you can remove and/or add items whenever you like.

**As of Python version 3.7, dictionaries are ordered . In Python 3.6 and earlier, dictionaries are unordered .

When choosing a collection type, it is useful to understand the properties of that type. Choosing the right type for a particular data set could mean retention of meaning, and, it could mean an increase in efficiency or security.


How to Use Python Tuples: Operations and Worked Examples


A Python tuple (tuple) is a data type that represents an ordered, immutable collection of data. It is similar to a list (list), but once a tuple has been created you cannot add, remove, or change its elements.

Tuples are created with round brackets () . A tuple with several elements is defined by separating the elements with commas , .

  • Ordered : the elements of a tuple keep a defined order, and you can access them by index.
  • Immutable : the elements of a tuple cannot be changed. Because elements cannot be added, removed, or modified after creation, tuples are safer than lists in this respect.

If you try to change an element anyway, you get an error like the following.

If you do want different contents, you can rebind the variable to a new tuple, which effectively updates the elements.
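The original screenshots are gone; a short sketch of both situations:

    my_tuple = (1, 2, 3)

    # Item assignment is not allowed on tuples:
    # my_tuple[0] = 10   # TypeError: 'tuple' object does not support item assignment

    # Rebinding the variable to a new tuple works instead:
    my_tuple = (10,) + my_tuple[1:]
    print(my_tuple)  # (10, 2, 3)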

Tuples support many of the same operations as lists. A few basic operations are shown below.

Access by index

Like lists, tuple elements can be read with square-bracket indexing.

Counting and searching for elements

  • count() : returns how many times the given element appears in the tuple.
  • index() : returns the position (index) of the first occurrence of the given element.
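The example images are missing; a brief sketch of index access together with count() and index():

    my_tuple = (1, 2, 2, 3, 2)

    print(my_tuple[0])        # 1  - access by index
    print(my_tuple[-1])       # 2  - negative indices count from the end

    print(my_tuple.count(2))  # 3  - how many times 2 appears
    print(my_tuple.index(3))  # 3  - index of the first occurrence of 3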

Tuple unpacking

The elements of a tuple can be expanded ( unpacked ) into individual variables.

When unpacking, the number of variables must match the number of elements in the tuple. You can also use * to collect the remaining elements into a list.
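A short sketch of both forms of unpacking (the names are arbitrary):

    point = (10, 20, 30)

    # Plain unpacking: the number of variables matches the number of elements.
    x, y, z = point
    print(x, y, z)      # 10 20 30

    # Starred unpacking: the remaining elements are collected into a list.
    first, *rest = point
    print(first, rest)  # 10 [20, 30]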

Example 1: Getting elements

  • my_tuple[0] gets the first element.
  • my_tuple[-1] gets the last element.
  • Both elements are combined into a new tuple.
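A sketch matching the steps above:

    my_tuple = (1, 2, 3, 4, 5)

    first = my_tuple[0]      # first element
    last = my_tuple[-1]      # last element
    result = (first, last)   # combine both into a new tuple

    print(result)  # (1, 5)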

Example 2: Swapping elements

Expected output: (1, 4, 3, 2, 5)

  • Convert the tuple to a list with list() .
  • Swap the 2nd and 4th elements of the list.
  • Convert the list back into a tuple.
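A sketch that reproduces the expected output:

    my_tuple = (1, 2, 3, 4, 5)

    temp = list(my_tuple)                # convert the tuple to a list
    temp[1], temp[3] = temp[3], temp[1]  # swap the 2nd and 4th elements
    my_tuple = tuple(temp)               # convert back to a tuple

    print(my_tuple)  # (1, 4, 3, 2, 5)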

Example 3: Concatenating tuples

Expected output: (1, 2, 3, 4)

  • Tuples are concatenated with the + operator.
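A sketch that reproduces the expected output:

    tuple_a = (1, 2)
    tuple_b = (3, 4)

    combined = tuple_a + tuple_b  # + builds a new tuple; the originals are unchanged
    print(combined)  # (1, 2, 3, 4)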

Example 4: Checking whether an element is included

  • Use the in operator to check whether 3 is contained in the tuple.
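A sketch of the membership check:

    my_tuple = (1, 2, 3, 4, 5)

    print(3 in my_tuple)   # True
    print(99 in my_tuple)  # False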


os — Miscellaneous operating system interfaces

Source code: Lib/os.py

This module provides a portable way of using operating system dependent functionality. If you just want to read or write a file see open() , if you want to manipulate paths, see the os.path module, and if you want to read all the lines in all the files on the command line see the fileinput module. For creating temporary files and directories see the tempfile module, and for high-level file and directory handling see the shutil module.

Notes on the availability of these functions:

The design of all built-in operating system dependent modules of Python is such that as long as the same functionality is available, it uses the same interface; for example, the function os.stat(path) returns stat information about path in the same format (which happens to have originated with the POSIX interface).

Extensions peculiar to a particular operating system are also available through the os module, but using them is of course a threat to portability.

All functions accepting path or file names accept both bytes and string objects, and result in an object of the same type, if a path or file name is returned.

On VxWorks, os.fork, os.execv and os.spawn*p* are not supported.

All functions in this module raise OSError (or subclasses thereof) in the case of invalid or inaccessible file names and paths, or other arguments that have the correct type, but are not accepted by the operating system.

An alias for the built-in OSError exception.

The name of the operating system dependent module imported. The following names have currently been registered: 'posix' , 'nt' , 'java' .

See also

sys.platform has a finer granularity. os.uname() gives system-dependent version information.

The platform module provides detailed checks for the system’s identity.

File Names, Command Line Arguments, and Environment Variables

In Python, file names, command line arguments, and environment variables are represented using the string type. On some systems, decoding these strings to and from bytes is necessary before passing them to the operating system. Python uses the file system encoding to perform this conversion (see sys.getfilesystemencoding() ).

Changed in version 3.1: On some systems, conversion using the file system encoding may fail. In this case, Python uses the surrogateescape encoding error handler , which means that undecodable bytes are replaced by a Unicode character U+DCxx on decoding, and these are again translated to the original byte on encoding.

The file system encoding must guarantee to successfully decode all bytes below 128. If the file system encoding fails to provide this guarantee, API functions may raise UnicodeErrors.

Process Parameters

These functions and data items provide information and operate on the current process and user.

Return the filename corresponding to the controlling terminal of the process.

Availability : Unix.

A mapping object where keys and values are strings that represent the process environment. For example, environ['HOME'] is the pathname of your home directory (on some platforms), and is equivalent to getenv("HOME") in C.

This mapping is captured the first time the os module is imported, typically during Python startup as part of processing site.py . Changes to the environment made after this time are not reflected in os.environ , except for changes made by modifying os.environ directly.

This mapping may be used to modify the environment as well as query the environment. putenv() will be called automatically when the mapping is modified.

On Unix, keys and values use sys.getfilesystemencoding() and 'surrogateescape' error handler. Use environb if you would like to use a different encoding.

Calling putenv() directly does not change os.environ , so it’s better to modify os.environ .

On some platforms, including FreeBSD and macOS, setting environ may cause memory leaks. Refer to the system documentation for putenv() .

You can delete items in this mapping to unset environment variables. unsetenv() will be called automatically when an item is deleted from os.environ , and when one of the pop() or clear() methods is called.

Changed in version 3.9: Updated to support PEP 584's merge ( | ) and update ( |= ) operators.
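As a brief illustration of the behaviour described above (the variable name MY_APP_MODE is invented for the example):

    import os

    # Reading: .get() avoids a KeyError for missing variables.
    print(os.environ.get("HOME"))

    # Assigning an item calls putenv() behind the scenes ...
    os.environ["MY_APP_MODE"] = "debug"

    # ... and deleting an item calls unsetenv().
    del os.environ["MY_APP_MODE"]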

Bytes version of environ : a mapping object where both keys and values are bytes objects representing the process environment. environ and environb are synchronized (modifying environb updates environ , and vice versa).

environb is only available if supports_bytes_environ is True .

New in version 3.2.

These functions are described in Files and Directories .

Encode path-like filename to the filesystem encoding with 'surrogateescape' error handler, or 'strict' on Windows; return bytes unchanged.

fsdecode() is the reverse function.

Changed in version 3.6: Support added to accept objects implementing the os.PathLike interface.

Decode the path-like filename from the filesystem encoding with 'surrogateescape' error handler, or 'strict' on Windows; return str unchanged.

fsencode() is the reverse function.

Return the file system representation of the path.

If str or bytes is passed in, it is returned unchanged. Otherwise __fspath__() is called and its value is returned as long as it is a str or bytes object. In all other cases, TypeError is raised.

New in version 3.6.

An abstract base class for objects representing a file system path, e.g. pathlib.PurePath .

Return the file system path representation of the object.

The method should only return a str or bytes object, with the preference being for str .

Return the value of the environment variable key if it exists, or default if it doesn’t. key , default and the result are str. Note that since getenv() uses os.environ , the mapping of getenv() is similarly also captured on import, and the function may not reflect future environment changes.

On Unix, keys and values are decoded with sys.getfilesystemencoding() and 'surrogateescape' error handler. Use os.getenvb() if you would like to use a different encoding.

Availability : most flavors of Unix, Windows.

Return the value of the environment variable key if it exists, or default if it doesn’t. key , default and the result are bytes. Note that since getenvb() uses os.environb , the mapping of getenvb() is similarly also captured on import, and the function may not reflect future environment changes.

getenvb() is only available if supports_bytes_environ is True .

Availability : most flavors of Unix.

Returns the list of directories that will be searched for a named executable, similar to a shell, when launching a process. env , when specified, should be an environment variable dictionary to lookup the PATH in. By default, when env is None , environ is used.

Return the effective group id of the current process. This corresponds to the “set id” bit on the file being executed in the current process.

Return the current process’s effective user id.

Return the real group id of the current process.

Return list of group ids that user belongs to. If group is not in the list, it is included; typically, group is specified as the group ID field from the password record for user , because that group ID will otherwise be potentially omitted.

New in version 3.3.

Return list of supplemental group ids associated with the current process.

On macOS, getgroups() behavior differs somewhat from other Unix platforms. If the Python interpreter was built with a deployment target of 10.5 or earlier, getgroups() returns the list of effective group ids associated with the current user process; this list is limited to a system-defined number of entries, typically 16, and may be modified by calls to setgroups() if suitably privileged. If built with a deployment target greater than 10.5 , getgroups() returns the current group access list for the user associated with the effective user id of the process; the group access list may change over the lifetime of the process, it is not affected by calls to setgroups() , and its length is not limited to 16. The deployment target value, MACOSX_DEPLOYMENT_TARGET , can be obtained with sysconfig.get_config_var() .

Return the name of the user logged in on the controlling terminal of the process. For most purposes, it is more useful to use getpass.getuser() since the latter checks the environment variables LOGNAME or USERNAME to find out who the user is, and falls back to pwd.getpwuid(os.getuid())[0] to get the login name of the current real user id.

Availability : Unix, Windows.

Return the process group id of the process with process id pid . If pid is 0, the process group id of the current process is returned.

Return the id of the current process group.

Return the current process id.

Return the parent’s process id. When the parent process has exited, on Unix the id returned is the one of the init process (1), on Windows it is still the same id, which may be already reused by another process.

Changed in version 3.2: Added support for Windows.

Get program scheduling priority. The value which is one of PRIO_PROCESS , PRIO_PGRP , or PRIO_USER , and who is interpreted relative to which (a process identifier for PRIO_PROCESS , process group identifier for PRIO_PGRP , and a user ID for PRIO_USER ). A zero value for who denotes (respectively) the calling process, the process group of the calling process, or the real user ID of the calling process.

Parameters for the getpriority() and setpriority() functions.

Return a tuple (ruid, euid, suid) denoting the current process’s real, effective, and saved user ids.

Return a tuple (rgid, egid, sgid) denoting the current process’s real, effective, and saved group ids.

Return the current process’s real user id.

Call the system initgroups() to initialize the group access list with all of the groups of which the specified username is a member, plus the specified group id.

Set the environment variable named key to the string value . Such changes to the environment affect subprocesses started with os.system() , popen() or fork() and execv() .

Assignments to items in os.environ are automatically translated into corresponding calls to putenv() ; however, calls to putenv() don’t update os.environ , so it is actually preferable to assign to items of os.environ . This also applies to getenv() and getenvb() , which respectively use os.environ and os.environb in their implementations.

Raises an auditing event os.putenv with arguments key , value .

Changed in version 3.9: The function is now always available.

Set the current process’s effective group id.

Set the current process’s effective user id.

Set the current process’ group id.

Set the list of supplemental group ids associated with the current process to groups . groups must be a sequence, and each element must be an integer identifying a group. This operation is typically available only to the superuser.

On macOS, the length of groups may not exceed the system-defined maximum number of effective group ids, typically 16. See the documentation for getgroups() for cases where it may not return the same group list set by calling setgroups().

Call the system call setpgrp() or setpgrp(0, 0) depending on which version is implemented (if any). See the Unix manual for the semantics.

Call the system call setpgid() to set the process group id of the process with id pid to the process group with id pgrp . See the Unix manual for the semantics.

Set program scheduling priority. The value which is one of PRIO_PROCESS , PRIO_PGRP , or PRIO_USER , and who is interpreted relative to which (a process identifier for PRIO_PROCESS , process group identifier for PRIO_PGRP , and a user ID for PRIO_USER ). A zero value for who denotes (respectively) the calling process, the process group of the calling process, or the real user ID of the calling process. priority is a value in the range -20 to 19. The default priority is 0; lower priorities cause more favorable scheduling.

Set the current process’s real and effective group ids.

Set the current process’s real, effective, and saved group ids.

Set the current process’s real, effective, and saved user ids.

Set the current process’s real and effective user ids.

Call the system call getsid() . See the Unix manual for the semantics.

Call the system call setsid() . See the Unix manual for the semantics.

Set the current process’s user id.

Return the error message corresponding to the error code in code . On platforms where strerror() returns NULL when given an unknown error number, ValueError is raised.

True if the native OS type of the environment is bytes (eg. False on Windows).

Set the current numeric umask and return the previous umask.

Returns information identifying the current operating system. The return value is an object with five attributes:

sysname - operating system name

nodename - name of machine on network (implementation-defined)

release - operating system release

version - operating system version

machine - hardware identifier

For backwards compatibility, this object is also iterable, behaving like a five-tuple containing sysname , nodename , release , version , and machine in that order.

Some systems truncate nodename to 8 characters or to the leading component; a better way to get the hostname is socket.gethostname() or even socket.gethostbyaddr(socket.gethostname()) .

Availability : recent flavors of Unix.

Changed in version 3.3: Return type changed from a tuple to a tuple-like object with named attributes.
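A short sketch of both access styles (Unix only, since uname() is not available on Windows):

    import os

    info = os.uname()
    print(info.sysname, info.release)   # access by attribute name

    # For backwards compatibility the result still unpacks like a five-tuple:
    sysname, nodename, release, version, machine = info
    print(machine)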

Unset (delete) the environment variable named key . Such changes to the environment affect subprocesses started with os.system() , popen() or fork() and execv() .

Deletion of items in os.environ is automatically translated into a corresponding call to unsetenv() ; however, calls to unsetenv() don’t update os.environ , so it is actually preferable to delete items of os.environ .

Raises an auditing event os.unsetenv with argument key .

Changed in version 3.9: The function is now always available and is also available on Windows.

File Object Creation

These functions create new file objects . (See also open() for opening file descriptors.)

Return an open file object connected to the file descriptor fd . This is an alias of the open() built-in function and accepts the same arguments. The only difference is that the first argument of fdopen() must always be an integer.

File Descriptor Operations

These functions operate on I/O streams referenced using file descriptors.

File descriptors are small integers corresponding to a file that has been opened by the current process. For example, standard input is usually file descriptor 0, standard output is 1, and standard error is 2. Further files opened by a process will then be assigned 3, 4, 5, and so forth. The name “file descriptor” is slightly deceptive; on Unix platforms, sockets and pipes are also referenced by file descriptors.

The fileno() method can be used to obtain the file descriptor associated with a file object when required. Note that using the file descriptor directly will bypass the file object methods, ignoring aspects such as internal buffering of data.

Close file descriptor fd .

This function is intended for low-level I/O and must be applied to a file descriptor as returned by os.open() or pipe() . To close a “file object” returned by the built-in function open() or by popen() or fdopen() , use its close() method.

Close all file descriptors from fd_low (inclusive) to fd_high (exclusive), ignoring errors. Equivalent to (but much faster than):
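The equivalent snippet was lost in extraction; it is roughly the following loop, wrapped in a function here so that the parameter names are defined:

    import os

    def closerange(fd_low, fd_high):
        for fd in range(fd_low, fd_high):
            try:
                os.close(fd)
            except OSError:
                pass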

Copy count bytes from file descriptor src , starting from offset offset_src , to file descriptor dst , starting from offset offset_dst . If offset_src is None, then src is read from the current position; respectively for offset_dst . The files pointed by src and dst must reside in the same filesystem, otherwise an OSError is raised with errno set to errno.EXDEV .

This copy is done without the additional cost of transferring data from the kernel to user space and then back into the kernel. Additionally, some filesystems could implement extra optimizations. The copy is done as if both files are opened as binary.

The return value is the amount of bytes copied. This could be less than the amount requested.

Availability : Linux kernel >= 4.5 or glibc >= 2.27.

New in version 3.8.

Return a string describing the encoding of the device associated with fd if it is connected to a terminal; else return None .

Return a duplicate of file descriptor fd . The new file descriptor is non-inheritable .

On Windows, when duplicating a standard stream (0: stdin, 1: stdout, 2: stderr), the new file descriptor is inheritable .

Changed in version 3.4: The new file descriptor is now non-inheritable.

Duplicate file descriptor fd to fd2 , closing the latter first if necessary. Return fd2 . The new file descriptor is inheritable by default or non-inheritable if inheritable is False .

Changed in version 3.4: Add the optional inheritable parameter.

Changed in version 3.7: Return fd2 on success. Previously, None was always returned.

Change the mode of the file given by fd to the numeric mode . See the docs for chmod() for possible values of mode . As of Python 3.3, this is equivalent to os.chmod(fd, mode) .

Raises an auditing event os.chmod with arguments path , mode , dir_fd .

Change the owner and group id of the file given by fd to the numeric uid and gid . To leave one of the ids unchanged, set it to -1. See chown() . As of Python 3.3, this is equivalent to os.chown(fd, uid, gid) .

Raises an auditing event os.chown with arguments path , uid , gid , dir_fd .

Force write of file with filedescriptor fd to disk. Does not force update of metadata.

This function is not available on MacOS.

Return system configuration information relevant to an open file. name specifies the configuration value to retrieve; it may be a string which is the name of a defined system value; these names are specified in a number of standards (POSIX.1, Unix 95, Unix 98, and others). Some platforms define additional names as well. The names known to the host operating system are given in the pathconf_names dictionary. For configuration variables not included in that mapping, passing an integer for name is also accepted.

If name is a string and is not known, ValueError is raised. If a specific value for name is not supported by the host system, even if it is included in pathconf_names , an OSError is raised with errno.EINVAL for the error number.

As of Python 3.3, this is equivalent to os.pathconf(fd, name) .

Get the status of the file descriptor fd . Return a stat_result object.

As of Python 3.3, this is equivalent to os.stat(fd) .

The stat() function.

Return information about the filesystem containing the file associated with file descriptor fd , like statvfs() . As of Python 3.3, this is equivalent to os.statvfs(fd) .

Force write of file with filedescriptor fd to disk. On Unix, this calls the native fsync() function; on Windows, the MS _commit() function.

If you’re starting with a buffered Python file object f , first do f.flush() , and then do os.fsync(f.fileno()) , to ensure that all internal buffers associated with f are written to disk.
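A small sketch of that pattern (the file name is a placeholder):

    import os

    with open("data.txt", "w") as f:
        f.write("important data")
        f.flush()              # flush Python's own buffers
        os.fsync(f.fileno())   # ask the OS to flush its buffers to disk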

Truncate the file corresponding to file descriptor fd , so that it is at most length bytes in size. As of Python 3.3, this is equivalent to os.truncate(fd, length) .

Raises an auditing event os.truncate with arguments fd , length .

Changed in version 3.5: Added support for Windows

Get the blocking mode of the file descriptor: False if the O_NONBLOCK flag is set, True if the flag is cleared.

See also set_blocking() and socket.socket.setblocking() .

New in version 3.5.

Return True if the file descriptor fd is open and connected to a tty(-like) device, else False .

Apply, test or remove a POSIX lock on an open file descriptor. fd is an open file descriptor. cmd specifies the command to use - one of F_LOCK , F_TLOCK , F_ULOCK or F_TEST . len specifies the section of the file to lock.

Raises an auditing event os.lockf with arguments fd , cmd , len .

Flags that specify what action lockf() will take.

Set the current position of file descriptor fd to position pos , modified by how : SEEK_SET or 0 to set the position relative to the beginning of the file; SEEK_CUR or 1 to set it relative to the current position; SEEK_END or 2 to set it relative to the end of the file. Return the new cursor position in bytes, starting from the beginning.

Parameters to the lseek() function. Their values are 0, 1, and 2, respectively.

New in version 3.3: Some operating systems could support additional values, like os.SEEK_HOLE or os.SEEK_DATA .

Open the file path and set various flags according to flags and possibly its mode according to mode . When computing mode , the current umask value is first masked out. Return the file descriptor for the newly opened file. The new file descriptor is non-inheritable .

For a description of the flag and mode values, see the C run-time documentation; flag constants (like O_RDONLY and O_WRONLY ) are defined in the os module. In particular, on Windows adding O_BINARY is needed to open files in binary mode.

This function can support paths relative to directory descriptors with the dir_fd parameter.

Raises an auditing event open with arguments path , mode , flags .

This function is intended for low-level I/O. For normal usage, use the built-in function open() , which returns a file object with read() and write() methods (and many more). To wrap a file descriptor in a file object, use fdopen() .

New in version 3.3: The dir_fd argument.

Changed in version 3.5: If the system call is interrupted and the signal handler does not raise an exception, the function now retries the system call instead of raising an InterruptedError exception (see PEP 475 for the rationale).

Changed in version 3.6: Accepts a path-like object .

The following constants are options for the flags parameter to the open() function. They can be combined using the bitwise OR operator | . Some of them are not available on all platforms. For descriptions of their availability and use, consult the open(2) manual page on Unix or the MSDN on Windows.

The above constants are available on Unix and Windows.

The above constants are only available on Unix.

Changed in version 3.3: Add O_CLOEXEC constant.

The above constants are only available on Windows.

The above constants are extensions and not present if they are not defined by the C library.

Changed in version 3.4: Add O_PATH on systems that support it. Add O_TMPFILE , only available on Linux Kernel 3.11 or newer.

Open a new pseudo-terminal pair. Return a pair of file descriptors (master, slave) for the pty and the tty, respectively. The new file descriptors are non-inheritable . For a (slightly) more portable approach, use the pty module.

Availability : some flavors of Unix.

Changed in version 3.4: The new file descriptors are now non-inheritable.

Create a pipe. Return a pair of file descriptors (r, w) usable for reading and writing, respectively. The new file descriptor is non-inheritable .
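A minimal sketch; note that the returned pair unpacks like any other tuple:

    import os

    r, w = os.pipe()
    os.write(w, b"hello")
    os.close(w)
    print(os.read(r, 1024))  # b'hello'
    os.close(r)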

Create a pipe with flags set atomically. flags can be constructed by ORing together one or more of these values: O_NONBLOCK , O_CLOEXEC . Return a pair of file descriptors (r, w) usable for reading and writing, respectively.

Ensures that enough disk space is allocated for the file specified by fd starting from offset and continuing for len bytes.

Announces an intention to access data in a specific pattern thus allowing the kernel to make optimizations. The advice applies to the region of the file specified by fd starting at offset and continuing for len bytes. advice is one of POSIX_FADV_NORMAL , POSIX_FADV_SEQUENTIAL , POSIX_FADV_RANDOM , POSIX_FADV_NOREUSE , POSIX_FADV_WILLNEED or POSIX_FADV_DONTNEED .

Flags that can be used in advice in posix_fadvise() that specify the access pattern that is likely to be used.

Read at most n bytes from file descriptor fd at a position of offset , leaving the file offset unchanged.

Return a bytestring containing the bytes read. If the end of the file referred to by fd has been reached, an empty bytes object is returned.

Read from a file descriptor fd at a position of offset into mutable bytes-like objects buffers , leaving the file offset unchanged. Transfer data into each buffer until it is full and then move on to the next buffer in the sequence to hold the rest of the data.

The flags argument contains a bitwise OR of zero or more of the following flags:

Return the total number of bytes actually read which can be less than the total capacity of all the objects.

The operating system may set a limit ( sysconf() value 'SC_IOV_MAX' ) on the number of buffers that can be used.

Combine the functionality of os.readv() and os.pread() .

Availability : Linux 2.6.30 and newer, FreeBSD 6.0 and newer, OpenBSD 2.7 and newer, AIX 7.1 and newer. Using flags requires Linux 4.6 or newer.

New in version 3.7.

Do not wait for data which is not immediately available. If this flag is specified, the system call will return instantly if it would have to read data from the backing storage or wait for a lock.

If some data was successfully read, it will return the number of bytes read. If no bytes were read, it will return -1 and set errno to errno.EAGAIN .

Availability : Linux 4.14 and newer.

High priority read/write. Allows block-based filesystems to use polling of the device, which provides lower latency, but may use additional resources.

Currently, on Linux, this feature is usable only on a file descriptor opened using the O_DIRECT flag.

Availability : Linux 4.6 and newer.

Write the bytestring in str to file descriptor fd at position of offset , leaving the file offset unchanged.

Return the number of bytes actually written.

Write the contents of buffers to file descriptor fd at offset offset , leaving the file offset unchanged. buffers must be a sequence of bytes-like objects . Buffers are processed in array order. Entire contents of the first buffer is written before proceeding to the second, and so on.

Return the total number of bytes actually written.

Combine the functionality of os.writev() and os.pwrite() .

Availability : Linux 2.6.30 and newer, FreeBSD 6.0 and newer, OpenBSD 2.7 and newer, AIX 7.1 and newer. Using flags requires Linux 4.7 or newer.

Provide a per-write equivalent of the O_DSYNC open(2) flag. This flag effect applies only to the data range written by the system call.

Availability : Linux 4.7 and newer.

Provide a per-write equivalent of the O_SYNC open(2) flag. This flag effect applies only to the data range written by the system call.

Read at most n bytes from file descriptor fd .

This function is intended for low-level I/O and must be applied to a file descriptor as returned by os.open() or pipe() . To read a “file object” returned by the built-in function open() or by popen() or fdopen() , or sys.stdin , use its read() or readline() methods.

Copy count bytes from file descriptor in_fd to file descriptor out_fd starting at offset . Return the number of bytes sent. When EOF is reached return 0 .

The first function notation is supported by all platforms that define sendfile() .

On Linux, if offset is given as None , the bytes are read from the current position of in_fd and the position of in_fd is updated.

The second case may be used on macOS and FreeBSD where headers and trailers are arbitrary sequences of buffers that are written before and after the data from in_fd is written. It returns the same as the first case.

On macOS and FreeBSD, a value of 0 for count specifies to send until the end of in_fd is reached.

All platforms support sockets as out_fd file descriptor, and some platforms allow other types (e.g. regular file, pipe) as well.

Cross-platform applications should not use headers , trailers and flags arguments.

For a higher-level wrapper of sendfile() , see socket.socket.sendfile() .

Changed in version 3.9: Parameters out and in were renamed to out_fd and in_fd .

Set the blocking mode of the specified file descriptor. Set the O_NONBLOCK flag if blocking is False , clear the flag otherwise.

See also get_blocking() and socket.socket.setblocking() .

Parameters to the sendfile() function, if the implementation supports them.

Read from a file descriptor fd into a number of mutable bytes-like objects buffers . Transfer data into each buffer until it is full and then move on to the next buffer in the sequence to hold the rest of the data.

Return the process group associated with the terminal given by fd (an open file descriptor as returned by os.open() ).

Set the process group associated with the terminal given by fd (an open file descriptor as returned by os.open() ) to pg .

Return a string which specifies the terminal device associated with file descriptor fd . If fd is not associated with a terminal device, an exception is raised.

Write the bytestring in str to file descriptor fd .

This function is intended for low-level I/O and must be applied to a file descriptor as returned by os.open() or pipe() . To write a “file object” returned by the built-in function open() or by popen() or fdopen() , or sys.stdout or sys.stderr , use its write() method.

Write the contents of buffers to file descriptor fd . buffers must be a sequence of bytes-like objects . Buffers are processed in array order. Entire contents of the first buffer is written before proceeding to the second, and so on.

Returns the total number of bytes actually written.

Querying the size of a terminal

Return the size of the terminal window as (columns, lines) , tuple of type terminal_size .

The optional argument fd (default STDOUT_FILENO , or standard output) specifies which file descriptor should be queried.

If the file descriptor is not connected to a terminal, an OSError is raised.

shutil.get_terminal_size() is the high-level function which should normally be used, os.get_terminal_size is the low-level implementation.

A subclass of tuple, holding (columns, lines) of the terminal window size.

Width of the terminal window in characters.

Height of the terminal window in characters.

Inheritance of File Descriptors

New in version 3.4.

A file descriptor has an “inheritable” flag which indicates if the file descriptor can be inherited by child processes. Since Python 3.4, file descriptors created by Python are non-inheritable by default.

On UNIX, non-inheritable file descriptors are closed in child processes at the execution of a new program, other file descriptors are inherited.

On Windows, non-inheritable handles and file descriptors are closed in child processes, except for standard streams (file descriptors 0, 1 and 2: stdin, stdout and stderr), which are always inherited. Using spawn* functions, all inheritable handles and all inheritable file descriptors are inherited. Using the subprocess module, all file descriptors except standard streams are closed, and inheritable handles are only inherited if the close_fds parameter is False .

Get the “inheritable” flag of the specified file descriptor (a boolean).

Set the “inheritable” flag of the specified file descriptor.

Get the “inheritable” flag of the specified handle (a boolean).

Availability : Windows.

Set the “inheritable” flag of the specified handle.

Files and Directories

On some Unix platforms, many of these functions support one or more of these features:

specifying a file descriptor: Normally the path argument provided to functions in the os module must be a string specifying a file path. However, some functions now alternatively accept an open file descriptor for their path argument. The function will then operate on the file referred to by the descriptor. (For POSIX systems, Python will call the variant of the function prefixed with f (e.g. call fchdir instead of chdir ).)

You can check whether or not path can be specified as a file descriptor for a particular function on your platform using os.supports_fd . If this functionality is unavailable, using it will raise a NotImplementedError .

If the function also supports dir_fd or follow_symlinks arguments, it’s an error to specify one of those when supplying path as a file descriptor.

paths relative to directory descriptors: If dir_fd is not None , it should be a file descriptor referring to a directory, and the path to operate on should be relative; path will then be relative to that directory. If the path is absolute, dir_fd is ignored. (For POSIX systems, Python will call the variant of the function with an at suffix and possibly prefixed with f (e.g. call faccessat instead of access ).

You can check whether or not dir_fd is supported for a particular function on your platform using os.supports_dir_fd . If it’s unavailable, using it will raise a NotImplementedError .

not following symlinks: If follow_symlinks is False , and the last element of the path to operate on is a symbolic link, the function will operate on the symbolic link itself rather than the file pointed to by the link. (For POSIX systems, Python will call the l... variant of the function.)

You can check whether or not follow_symlinks is supported for a particular function on your platform using os.supports_follow_symlinks . If it’s unavailable, using it will raise a NotImplementedError .

Use the real uid/gid to test for access to path . Note that most operations will use the effective uid/gid, therefore this routine can be used in a suid/sgid environment to test if the invoking user has the specified access to path . mode should be F_OK to test the existence of path , or it can be the inclusive OR of one or more of R_OK , W_OK , and X_OK to test permissions. Return True if access is allowed, False if not. See the Unix man page access(2) for more information.

This function can support specifying paths relative to directory descriptors and not following symlinks .

If effective_ids is True , access() will perform its access checks using the effective uid/gid instead of the real uid/gid. effective_ids may not be supported on your platform; you can check whether or not it is available using os.supports_effective_ids . If it is unavailable, using it will raise a NotImplementedError .

Using access() to check if a user is authorized to e.g. open a file before actually doing so using open() creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it. It’s preferable to use EAFP techniques: rather than checking for access first and then opening the file, open it directly and handle the exception.
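The two snippets that illustrated this contrast were lost in extraction; a hedged sketch, with "myfile" as a placeholder filename:

    import os

    # Check-then-open (racy):
    if os.access("myfile", os.R_OK):
        with open("myfile") as fp:
            data = fp.read()
    else:
        data = "some default data"

    # EAFP: just try to open the file and handle the failure.
    try:
        fp = open("myfile")
    except OSError:
        data = "some default data"
    else:
        with fp:
            data = fp.read()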

I/O operations may fail even when access() indicates that they would succeed, particularly for operations on network filesystems which may have permissions semantics beyond the usual POSIX permission-bit model.

Changed in version 3.3: Added the dir_fd , effective_ids , and follow_symlinks parameters.

Values to pass as the mode parameter of access() to test the existence, readability, writability and executability of path , respectively.

Change the current working directory to path .

This function can support specifying a file descriptor . The descriptor must refer to an opened directory, not an open file.

This function can raise OSError and subclasses such as FileNotFoundError , PermissionError , and NotADirectoryError .

Raises an auditing event os.chdir with argument path .

New in version 3.3: Added support for specifying path as a file descriptor on some platforms.

Set the flags of path to the numeric flags . flags may take a combination (bitwise OR) of the following values (as defined in the stat module):

stat.UF_NODUMP

stat.UF_IMMUTABLE

stat.UF_APPEND

stat.UF_OPAQUE

stat.UF_NOUNLINK

stat.UF_COMPRESSED

stat.UF_HIDDEN

stat.SF_ARCHIVED

stat.SF_IMMUTABLE

stat.SF_APPEND

stat.SF_NOUNLINK

stat.SF_SNAPSHOT

This function can support not following symlinks .

Raises an auditing event os.chflags with arguments path , flags .

New in version 3.3: The follow_symlinks argument.

Change the mode of path to the numeric mode . mode may take one of the following values (as defined in the stat module) or bitwise ORed combinations of them:

stat.S_ISUID

stat.S_ISGID

stat.S_ENFMT

stat.S_ISVTX

stat.S_IREAD

stat.S_IWRITE

stat.S_IEXEC

stat.S_IRWXU

stat.S_IRUSR

stat.S_IWUSR

stat.S_IXUSR

stat.S_IRWXG

stat.S_IRGRP

stat.S_IWGRP

stat.S_IXGRP

stat.S_IRWXO

stat.S_IROTH

stat.S_IWOTH

stat.S_IXOTH

This function can support specifying a file descriptor , paths relative to directory descriptors and not following symlinks .

Although Windows supports chmod() , you can only set the file’s read-only flag with it (via the stat.S_IWRITE and stat.S_IREAD constants or a corresponding integer value). All other bits are ignored.

New in version 3.3: Added support for specifying path as an open file descriptor, and the dir_fd and follow_symlinks arguments.

Change the owner and group id of path to the numeric uid and gid . To leave one of the ids unchanged, set it to -1.

See shutil.chown() for a higher-level function that accepts names in addition to numeric ids.

Changed in version 3.6: Supports a path-like object .

Change the root directory of the current process to path .

Change the current working directory to the directory represented by the file descriptor fd . The descriptor must refer to an opened directory, not an open file. As of Python 3.3, this is equivalent to os.chdir(fd) .

Return a string representing the current working directory.

Return a bytestring representing the current working directory.

Changed in version 3.8: The function now uses the UTF-8 encoding on Windows, rather than the ANSI code page: see PEP 529 for the rationale. The function is no longer deprecated on Windows.

Set the flags of path to the numeric flags , like chflags() , but do not follow symbolic links. As of Python 3.3, this is equivalent to os.chflags(path, flags, follow_symlinks=False) .

Change the mode of path to the numeric mode . If path is a symlink, this affects the symlink rather than the target. See the docs for chmod() for possible values of mode . As of Python 3.3, this is equivalent to os.chmod(path, mode, follow_symlinks=False) .

Change the owner and group id of path to the numeric uid and gid . This function will not follow symbolic links. As of Python 3.3, this is equivalent to os.chown(path, uid, gid, follow_symlinks=False) .

Create a hard link pointing to src named dst .

This function can support specifying src_dir_fd and/or dst_dir_fd to supply paths relative to directory descriptors , and not following symlinks .

Raises an auditing event os.link with arguments src , dst , src_dir_fd , dst_dir_fd .

Changed in version 3.2: Added Windows support.

New in version 3.3: Added the src_dir_fd , dst_dir_fd , and follow_symlinks arguments.

Changed in version 3.6: Accepts a path-like object for src and dst .

Return a list containing the names of the entries in the directory given by path . The list is in arbitrary order, and does not include the special entries '.' and '..' even if they are present in the directory. If a file is removed from or added to the directory during the call of this function, whether a name for that file be included is unspecified.

path may be a path-like object . If path is of type bytes (directly or indirectly through the PathLike interface), the filenames returned will also be of type bytes ; in all other circumstances, they will be of type str .

This function can also support specifying a file descriptor ; the file descriptor must refer to a directory.

Raises an auditing event os.listdir with argument path .

To encode str filenames to bytes , use fsencode() .

The scandir() function returns directory entries along with file attribute information, giving better performance for many common use cases.

Changed in version 3.2: The path parameter became optional.

New in version 3.3: Added support for specifying path as an open file descriptor.

Perform the equivalent of an lstat() system call on the given path. Similar to stat() , but does not follow symbolic links. Return a stat_result object.

On platforms that do not support symbolic links, this is an alias for stat() .

As of Python 3.3, this is equivalent to os.stat(path, dir_fd=dir_fd, follow_symlinks=False) .

This function can also support paths relative to directory descriptors .

Changed in version 3.2: Added support for Windows 6.0 (Vista) symbolic links.

Changed in version 3.3: Added the dir_fd parameter.

Changed in version 3.8: On Windows, now opens reparse points that represent another path (name surrogates), including symbolic links and directory junctions. Other kinds of reparse points are resolved by the operating system as for stat() .

Create a directory named path with numeric mode mode .

If the directory already exists, FileExistsError is raised. If a parent directory in the path does not exist, FileNotFoundError is raised.

On some systems, mode is ignored. Where it is used, the current umask value is first masked out. If bits other than the last 9 (i.e. the last 3 digits of the octal representation of the mode ) are set, their meaning is platform-dependent. On some platforms, they are ignored and you should call chmod() explicitly to set them.

On Windows, a mode of 0o700 is specifically handled to apply access control to the new directory such that only the current user and administrators have access. Other values of mode are ignored.

It is also possible to create temporary directories; see the tempfile module’s tempfile.mkdtemp() function.

Raises an auditing event os.mkdir with arguments path , mode , dir_fd .

Changed in version 3.9.20: Windows now handles a mode of 0o700 .

Recursive directory creation function. Like mkdir() , but makes all intermediate-level directories needed to contain the leaf directory.

The mode parameter is passed to mkdir() for creating the leaf directory; see the mkdir() description for how it is interpreted. To set the file permission bits of any newly-created parent directories you can set the umask before invoking makedirs() . The file permission bits of existing parent directories are not changed.

If exist_ok is False (the default), a FileExistsError is raised if the target directory already exists.

makedirs() will become confused if the path elements to create include pardir (eg. “..” on UNIX systems).

This function handles UNC paths correctly.

New in version 3.2: The exist_ok parameter.

Changed in version 3.4.1: Before Python 3.4.1, if exist_ok was True and the directory existed, makedirs() would still raise an error if mode did not match the mode of the existing directory. Since this behavior was impossible to implement safely, it was removed in Python 3.4.1. See bpo-21082 .

Changed in version 3.7: The mode argument no longer affects the file permission bits of newly-created intermediate-level directories.

Create a FIFO (a named pipe) named path with numeric mode mode . The current umask value is first masked out from the mode.

FIFOs are pipes that can be accessed like regular files. FIFOs exist until they are deleted (for example with os.unlink() ). Generally, FIFOs are used as rendezvous between “client” and “server” type processes: the server opens the FIFO for reading, and the client opens it for writing. Note that mkfifo() doesn’t open the FIFO — it just creates the rendezvous point.

Create a filesystem node (file, device special file or named pipe) named path . mode specifies both the permissions to use and the type of node to be created, being combined (bitwise OR) with one of stat.S_IFREG , stat.S_IFCHR , stat.S_IFBLK , and stat.S_IFIFO (those constants are available in stat ). For stat.S_IFCHR and stat.S_IFBLK , device defines the newly created device special file (probably using os.makedev() ), otherwise it is ignored.

Extract the device major number from a raw device number (usually the st_dev or st_rdev field from stat ).

Extract the device minor number from a raw device number (usually the st_dev or st_rdev field from stat ).

Compose a raw device number from the major and minor device numbers.

Return system configuration information relevant to a named file. name specifies the configuration value to retrieve; it may be a string which is the name of a defined system value; these names are specified in a number of standards (POSIX.1, Unix 95, Unix 98, and others). Some platforms define additional names as well. The names known to the host operating system are given in the pathconf_names dictionary. For configuration variables not included in that mapping, passing an integer for name is also accepted.

This function can support specifying a file descriptor .

Dictionary mapping names accepted by pathconf() and fpathconf() to the integer values defined for those names by the host operating system. This can be used to determine the set of names known to the system.

Return a string representing the path to which the symbolic link points. The result may be either an absolute or relative pathname; if it is relative, it may be converted to an absolute pathname using os.path.join(os.path.dirname(path), result) .

If the path is a string object (directly or indirectly through a PathLike interface), the result will also be a string object, and the call may raise a UnicodeDecodeError. If the path is a bytes object (direct or indirectly), the result will be a bytes object.

When trying to resolve a path that may contain links, use realpath() to properly handle recursion and platform differences.

Changed in version 3.6: Accepts a path-like object on Unix.

Changed in version 3.8: Accepts a path-like object and a bytes object on Windows.

Changed in version 3.8: Added support for directory junctions, and changed to return the substitution path (which typically includes \\?\ prefix) rather than the optional “print name” field that was previously returned.

Remove (delete) the file path . If path is a directory, an IsADirectoryError is raised. Use rmdir() to remove directories. If the file does not exist, a FileNotFoundError is raised.

This function can support paths relative to directory descriptors .

On Windows, attempting to remove a file that is in use causes an exception to be raised; on Unix, the directory entry is removed but the storage allocated to the file is not made available until the original file is no longer in use.

This function is semantically identical to unlink() .

Raises an auditing event os.remove with arguments path , dir_fd .

Remove directories recursively. Works like rmdir() except that, if the leaf directory is successfully removed, removedirs() tries to successively remove every parent directory mentioned in path until an error is raised (which is ignored, because it generally means that a parent directory is not empty). For example, os.removedirs('foo/bar/baz') will first remove the directory 'foo/bar/baz' , and then remove 'foo/bar' and 'foo' if they are empty. Raises OSError if the leaf directory could not be successfully removed.

Rename the file or directory src to dst . If dst exists, the operation will fail with an OSError subclass in a number of cases:

On Windows, if dst exists a FileExistsError is always raised.

On Unix, if src is a file and dst is a directory or vice-versa, an IsADirectoryError or a NotADirectoryError will be raised respectively. If both are directories and dst is empty, dst will be silently replaced. If dst is a non-empty directory, an OSError is raised. If both are files, dst will be replaced silently if the user has permission. The operation may fail on some Unix flavors if src and dst are on different filesystems. If successful, the renaming will be an atomic operation (this is a POSIX requirement).

This function can support specifying src_dir_fd and/or dst_dir_fd to supply paths relative to directory descriptors .

If you want cross-platform overwriting of the destination, use replace() .

Raises an auditing event os.rename with arguments src , dst , src_dir_fd , dst_dir_fd .

New in version 3.3: The src_dir_fd and dst_dir_fd arguments.

Recursive directory or file renaming function. Works like rename() , except creation of any intermediate directories needed to make the new pathname good is attempted first. After the rename, directories corresponding to rightmost path segments of the old name will be pruned away using removedirs() .

This function can fail with the new directory structure made if you lack permissions needed to remove the leaf directory or file.

Changed in version 3.6: Accepts a path-like object for old and new .

Rename the file or directory src to dst . If dst is a non-empty directory, OSError will be raised. If dst exists and is a file, it will be replaced silently if the user has permission. The operation may fail if src and dst are on different filesystems. If successful, the renaming will be an atomic operation (this is a POSIX requirement).

Remove (delete) the directory path . If the directory does not exist or is not empty, a FileNotFoundError or an OSError is raised, respectively. In order to remove whole directory trees, shutil.rmtree() can be used.

Raises an auditing event os.rmdir with arguments path , dir_fd .

New in version 3.3: The dir_fd parameter.

Return an iterator of os.DirEntry objects corresponding to the entries in the directory given by path . The entries are yielded in arbitrary order, and the special entries '.' and '..' are not included. If a file is removed from or added to the directory after creating the iterator, whether an entry for that file be included is unspecified.

Using scandir() instead of listdir() can significantly increase the performance of code that also needs file type or file attribute information, because os.DirEntry objects expose this information if the operating system provides it when scanning a directory. All os.DirEntry methods may perform a system call, but is_dir() and is_file() usually only require a system call for symbolic links; os.DirEntry.stat() always requires a system call on Unix but only requires one for symbolic links on Windows.

path may be a path-like object . If path is of type bytes (directly or indirectly through the PathLike interface), the type of the name and path attributes of each os.DirEntry will be bytes ; in all other circumstances, they will be of type str .

Raises an auditing event os.scandir with argument path .

The scandir() iterator supports the context manager protocol and has the following method:

Close the iterator and free acquired resources.

This is called automatically when the iterator is exhausted or garbage collected, or when an error happens during iterating. However it is advisable to call it explicitly or use the with statement.

The following example shows a simple use of scandir() to display all the files (excluding directories) in the given path that don’t start with '.' . The entry.is_file() call will generally not make an additional system call:
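The example itself did not survive extraction; a sketch along the lines described, with the directory name as a placeholder:

    import os

    with os.scandir("some_directory") as it:
        for entry in it:
            # Skip hidden entries and anything that is not a regular file.
            if not entry.name.startswith(".") and entry.is_file():
                print(entry.name)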

On Unix-based systems, scandir() uses the system’s opendir() and readdir() functions. On Windows, it uses the Win32 FindFirstFileW and FindNextFileW functions.

New in version 3.6: Added support for the context manager protocol and the close() method. If a scandir() iterator is neither exhausted nor explicitly closed a ResourceWarning will be emitted in its destructor.

The function accepts a path-like object .

Changed in version 3.7: Added support for file descriptors on Unix.

Object yielded by scandir() to expose the file path and other file attributes of a directory entry.

scandir() will provide as much of this information as possible without making additional system calls. When a stat() or lstat() system call is made, the os.DirEntry object will cache the result.

os.DirEntry instances are not intended to be stored in long-lived data structures; if you know the file metadata has changed or if a long time has elapsed since calling scandir() , call os.stat(entry.path) to fetch up-to-date information.

Because the os.DirEntry methods can make operating system calls, they may also raise OSError . If you need very fine-grained control over errors, you can catch OSError when calling one of the os.DirEntry methods and handle as appropriate.

To be directly usable as a path-like object , os.DirEntry implements the PathLike interface.

Attributes and methods on a os.DirEntry instance are as follows:

The entry’s base filename, relative to the scandir() path argument.

The name attribute will be bytes if the scandir() path argument is of type bytes and str otherwise. Use fsdecode() to decode byte filenames.

The entry’s full path name: equivalent to os.path.join(scandir_path, entry.name) where scandir_path is the scandir() path argument. The path is only absolute if the scandir() path argument was absolute. If the scandir() path argument was a file descriptor , the path attribute is the same as the name attribute.

The path attribute will be bytes if the scandir() path argument is of type bytes and str otherwise. Use fsdecode() to decode byte filenames.

Return the inode number of the entry.

The result is cached on the os.DirEntry object. Use os.stat(entry.path, follow_symlinks=False).st_ino to fetch up-to-date information.

On the first, uncached call, a system call is required on Windows but not on Unix.

Return True if this entry is a directory or a symbolic link pointing to a directory; return False if the entry is or points to any other kind of file, or if it doesn’t exist anymore.

If follow_symlinks is False , return True only if this entry is a directory (without following symlinks); return False if the entry is any other kind of file or if it doesn’t exist anymore.

The result is cached on the os.DirEntry object, with a separate cache for follow_symlinks True and False . Call os.stat() along with stat.S_ISDIR() to fetch up-to-date information.

On the first, uncached call, no system call is required in most cases. Specifically, for non-symlinks, neither Windows nor Unix requires a system call, except on certain Unix file systems, such as network file systems, that return dirent.d_type == DT_UNKNOWN . If the entry is a symlink, a system call will be required to follow the symlink unless follow_symlinks is False .

This method can raise OSError , such as PermissionError , but FileNotFoundError is caught and not raised.

Return True if this entry is a file or a symbolic link pointing to a file; return False if the entry is or points to a directory or other non-file entry, or if it doesn’t exist anymore.

If follow_symlinks is False , return True only if this entry is a file (without following symlinks); return False if the entry is a directory or other non-file entry, or if it doesn’t exist anymore.

The result is cached on the os.DirEntry object. Caching, system calls made, and exceptions raised are as per is_dir() .

Return True if this entry is a symbolic link (even if broken); return False if the entry points to a directory or any kind of file, or if it doesn’t exist anymore.

The result is cached on the os.DirEntry object. Call os.path.islink() to fetch up-to-date information.

On the first, uncached call, no system call is required in most cases. Specifically, neither Windows nor Unix requires a system call, except on certain Unix file systems, such as network file systems, that return dirent.d_type == DT_UNKNOWN .

Return a stat_result object for this entry. This method follows symbolic links by default; to stat a symbolic link add the follow_symlinks=False argument.

On Unix, this method always requires a system call. On Windows, it only requires a system call if follow_symlinks is True and the entry is a reparse point (for example, a symbolic link or directory junction).

On Windows, the st_ino , st_dev and st_nlink attributes of the stat_result are always set to zero. Call os.stat() to get these attributes.

The result is cached on the os.DirEntry object, with a separate cache for follow_symlinks True and False . Call os.stat() to fetch up-to-date information.

Note that there is a nice correspondence between several attributes and methods of os.DirEntry and of pathlib.Path . In particular, the name attribute has the same meaning, as do the is_dir() , is_file() , is_symlink() and stat() methods.

Changed in version 3.6: Added support for the PathLike interface. Added support for bytes paths on Windows.

Get the status of a file or a file descriptor. Perform the equivalent of a stat() system call on the given path. path may be specified as either a string or bytes – directly or indirectly through the PathLike interface – or as an open file descriptor. Return a stat_result object.

This function normally follows symlinks; to stat a symlink add the argument follow_symlinks=False , or use lstat() .

This function can support specifying a file descriptor and not following symlinks .

On Windows, passing follow_symlinks=False will disable following all name-surrogate reparse points, which includes symlinks and directory junctions. Other types of reparse points that do not resemble links or that the operating system is unable to follow will be opened directly. When following a chain of multiple links, this may result in the original link being returned instead of the non-link that prevented full traversal. To obtain stat results for the final path in this case, use the os.path.realpath() function to resolve the path name as far as possible and call lstat() on the result. This does not apply to dangling symlinks or junction points, which will raise the usual exceptions.
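For illustration, a minimal sketch of calling stat() on a hypothetical file:

    import os

    statinfo = os.stat('somefile.txt')   # hypothetical file name
    print(statinfo.st_size)              # size in bytes
    print(statinfo.st_mtime)             # modification time in seconds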

See also: fstat() and lstat() functions.

New in version 3.3: Added the dir_fd and follow_symlinks arguments, specifying a file descriptor instead of a path.

Changed in version 3.8: On Windows, all reparse points that can be resolved by the operating system are now followed, and passing follow_symlinks=False disables following all name surrogate reparse points. If the operating system reaches a reparse point that it is not able to follow, stat now returns the information for the original path as if follow_symlinks=False had been specified instead of raising an error.

Object whose attributes correspond roughly to the members of the stat structure. It is used for the result of os.stat() , os.fstat() and os.lstat() .

Attributes:

File mode: file type and file mode bits (permissions).

Platform dependent, but if non-zero, uniquely identifies the file for a given value of st_dev . Typically:

the inode number on Unix,

the file index on Windows

Identifier of the device on which this file resides.

Number of hard links.

User identifier of the file owner.

Group identifier of the file owner.

Size of the file in bytes, if it is a regular file or a symbolic link. The size of a symbolic link is the length of the pathname it contains, without a terminating null byte.

Timestamps:

Time of most recent access expressed in seconds.

Time of most recent content modification expressed in seconds.

Platform dependent:

the time of most recent metadata change on Unix,

the time of creation on Windows, expressed in seconds.

Time of most recent access expressed in nanoseconds as an integer.

Time of most recent content modification expressed in nanoseconds as an integer.

the time of creation on Windows, expressed in nanoseconds as an integer.

The exact meaning and resolution of the st_atime , st_mtime , and st_ctime attributes depend on the operating system and the file system. For example, on Windows systems using the FAT or FAT32 file systems, st_mtime has 2-second resolution, and st_atime has only 1-day resolution. See your operating system documentation for details.

Similarly, although st_atime_ns , st_mtime_ns , and st_ctime_ns are always expressed in nanoseconds, many systems do not provide nanosecond precision. On systems that do provide nanosecond precision, the floating-point object used to store st_atime , st_mtime , and st_ctime cannot preserve all of it, and as such will be slightly inexact. If you need the exact timestamps you should always use st_atime_ns , st_mtime_ns , and st_ctime_ns .

On some Unix systems (such as Linux), the following attributes may also be available:

Number of 512-byte blocks allocated for file. This may be smaller than st_size /512 when the file has holes.

“Preferred” blocksize for efficient file system I/O. Writing to a file in smaller chunks may cause an inefficient read-modify-rewrite.

Type of device if an inode device.

User defined flags for file.

On other Unix systems (such as FreeBSD), the following attributes may be available (but may be only filled out if root tries to use them):

File generation number.

Time of file creation.

On Solaris and derivatives, the following attributes may also be available:

String that uniquely identifies the type of the filesystem that contains the file.

On macOS systems, the following attributes may also be available:

Real size of the file.

Creator of the file.

On Windows systems, the following attributes are also available:

Windows file attributes: dwFileAttributes member of the BY_HANDLE_FILE_INFORMATION structure returned by GetFileInformationByHandle() . See the FILE_ATTRIBUTE_* constants in the stat module.

When st_file_attributes has the FILE_ATTRIBUTE_REPARSE_POINT set, this field contains the tag identifying the type of reparse point. See the IO_REPARSE_TAG_* constants in the stat module.

The standard module stat defines functions and constants that are useful for extracting information from a stat structure. (On Windows, some items are filled with dummy values.)

For backward compatibility, a stat_result instance is also accessible as a tuple of at least 10 integers giving the most important (and portable) members of the stat structure, in the order st_mode , st_ino , st_dev , st_nlink , st_uid , st_gid , st_size , st_atime , st_mtime , st_ctime . More items may be added at the end by some implementations. For compatibility with older Python versions, accessing stat_result as a tuple always returns integers.

New in version 3.3: Added the st_atime_ns , st_mtime_ns , and st_ctime_ns members.

New in version 3.5: Added the st_file_attributes member on Windows.

Changed in version 3.5: Windows now returns the file index as st_ino when available.

New in version 3.7: Added the st_fstype member to Solaris/derivatives.

New in version 3.8: Added the st_reparse_tag member on Windows.

Changed in version 3.8: On Windows, the st_mode member now identifies special files as S_IFCHR , S_IFIFO or S_IFBLK as appropriate.

Perform a statvfs() system call on the given path. The return value is an object whose attributes describe the filesystem on the given path, and correspond to the members of the statvfs structure, namely: f_bsize , f_frsize , f_blocks , f_bfree , f_bavail , f_files , f_ffree , f_favail , f_flag , f_namemax , f_fsid .

Two module-level constants are defined for the f_flag attribute’s bit-flags: if ST_RDONLY is set, the filesystem is mounted read-only, and if ST_NOSUID is set, the semantics of setuid/setgid bits are disabled or not supported.

Additional module-level constants are defined for GNU/glibc based systems. These are ST_NODEV (disallow access to device special files), ST_NOEXEC (disallow program execution), ST_SYNCHRONOUS (writes are synced at once), ST_MANDLOCK (allow mandatory locks on an FS), ST_WRITE (write on file/directory/symlink), ST_APPEND (append-only file), ST_IMMUTABLE (immutable file), ST_NOATIME (do not update access times), ST_NODIRATIME (do not update directory access times), ST_RELATIME (update atime relative to mtime/ctime).

Changed in version 3.2: The ST_RDONLY and ST_NOSUID constants were added.

Changed in version 3.4: The ST_NODEV , ST_NOEXEC , ST_SYNCHRONOUS , ST_MANDLOCK , ST_WRITE , ST_APPEND , ST_IMMUTABLE , ST_NOATIME , ST_NODIRATIME , and ST_RELATIME constants were added.

New in version 3.7: Added f_fsid .

A set object indicating which functions in the os module accept an open file descriptor for their dir_fd parameter. Different platforms provide different features, and the underlying functionality Python uses to implement the dir_fd parameter is not available on all platforms Python supports. For consistency’s sake, functions that may support dir_fd always allow specifying the parameter, but will throw an exception if the functionality is used when it’s not locally available. (Specifying None for dir_fd is always supported on all platforms.)

To check whether a particular function accepts an open file descriptor for its dir_fd parameter, use the in operator on supports_dir_fd . As an example, this expression evaluates to True if os.stat() accepts open file descriptors for dir_fd on the local platform:
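    os.stat in os.supports_dir_fd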

Currently dir_fd parameters only work on Unix platforms; none of them work on Windows.

A set object indicating whether os.access() permits specifying True for its effective_ids parameter on the local platform. (Specifying False for effective_ids is always supported on all platforms.) If the local platform supports it, the collection will contain os.access() ; otherwise it will be empty.

This expression evaluates to True if os.access() supports effective_ids=True on the local platform:
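    os.access in os.supports_effective_ids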

Currently effective_ids is only supported on Unix platforms; it does not work on Windows.

A set object indicating which functions in the os module permit specifying their path parameter as an open file descriptor on the local platform. Different platforms provide different features, and the underlying functionality Python uses to accept open file descriptors as path arguments is not available on all platforms Python supports.

To determine whether a particular function permits specifying an open file descriptor for its path parameter, use the in operator on supports_fd . As an example, this expression evaluates to True if os.chdir() accepts open file descriptors for path on your local platform:
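    os.chdir in os.supports_fd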

A set object indicating which functions in the os module accept False for their follow_symlinks parameter on the local platform. Different platforms provide different features, and the underlying functionality Python uses to implement follow_symlinks is not available on all platforms Python supports. For consistency’s sake, functions that may support follow_symlinks always allow specifying the parameter, but will throw an exception if the functionality is used when it’s not locally available. (Specifying True for follow_symlinks is always supported on all platforms.)

To check whether a particular function accepts False for its follow_symlinks parameter, use the in operator on supports_follow_symlinks . As an example, this expression evaluates to True if you may specify follow_symlinks=False when calling os.stat() on the local platform:
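    os.stat in os.supports_follow_symlinks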

Create a symbolic link pointing to src named dst .

On Windows, a symlink represents either a file or a directory, and does not morph to the target dynamically. If the target is present, the type of the symlink will be created to match. Otherwise, the symlink will be created as a directory if target_is_directory is True or a file symlink (the default) otherwise. On non-Windows platforms, target_is_directory is ignored.

On newer versions of Windows 10, unprivileged accounts can create symlinks if Developer Mode is enabled. When Developer Mode is not available/enabled, the SeCreateSymbolicLinkPrivilege privilege is required, or the process must be run as an administrator.

OSError is raised when the function is called by an unprivileged user.

Raises an auditing event os.symlink with arguments src , dst , dir_fd .

New in version 3.3: Added the dir_fd argument, and now allow target_is_directory on non-Windows platforms.

Changed in version 3.8: Added support for unelevated symlinks on Windows with Developer Mode.

Force write of everything to disk.

Truncate the file corresponding to path , so that it is at most length bytes in size.

Raises an auditing event os.truncate with arguments path , length .

Remove (delete) the file path . This function is semantically identical to remove() ; the unlink name is its traditional Unix name. Please see the documentation for remove() for further information.

Set the access and modified times of the file specified by path .

utime() takes two optional parameters, times and ns . These specify the times set on path and are used as follows:

If ns is specified, it must be a 2-tuple of the form (atime_ns, mtime_ns) where each member is an int expressing nanoseconds.

If times is not None , it must be a 2-tuple of the form (atime, mtime) where each member is an int or float expressing seconds.

If times is None and ns is unspecified, this is equivalent to specifying ns=(atime_ns, mtime_ns) where both times are the current time.

It is an error to specify tuples for both times and ns .

Note that the exact times you set here may not be returned by a subsequent stat() call, depending on the resolution with which your operating system records access and modification times; see stat() . The best way to preserve exact times is to use the st_atime_ns and st_mtime_ns fields from the os.stat() result object with the ns parameter to utime .
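For instance, a minimal sketch that copies the exact timestamps from one file onto another using the nanosecond fields; the file names are illustrative:

    import os

    src, dst = 'original.txt', 'copy.txt'   # illustrative paths
    st = os.stat(src)
    os.utime(dst, ns=(st.st_atime_ns, st.st_mtime_ns))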

Raises an auditing event os.utime with arguments path , times , ns , dir_fd .

New in version 3.3: Added support for specifying path as an open file descriptor, and the dir_fd , follow_symlinks , and ns parameters.

Generate the file names in a directory tree by walking the tree either top-down or bottom-up. For each directory in the tree rooted at directory top (including top itself), it yields a 3-tuple (dirpath, dirnames, filenames) .

dirpath is a string, the path to the directory. dirnames is a list of the names of the subdirectories in dirpath (excluding '.' and '..' ). filenames is a list of the names of the non-directory files in dirpath . Note that the names in the lists contain no path components. To get a full path (which begins with top ) to a file or directory in dirpath , do os.path.join(dirpath, name) . Whether or not the lists are sorted depends on the file system. If a file is removed from or added to the dirpath directory during generating the lists, whether a name for that file will be included is unspecified.

If optional argument topdown is True or not specified, the triple for a directory is generated before the triples for any of its subdirectories (directories are generated top-down). If topdown is False , the triple for a directory is generated after the triples for all of its subdirectories (directories are generated bottom-up). No matter the value of topdown , the list of subdirectories is retrieved before the tuples for the directory and its subdirectories are generated.

When topdown is True , the caller can modify the dirnames list in-place (perhaps using del or slice assignment), and walk() will only recurse into the subdirectories whose names remain in dirnames ; this can be used to prune the search, impose a specific order of visiting, or even to inform walk() about directories the caller creates or renames before it resumes walk() again. Modifying dirnames when topdown is False has no effect on the behavior of the walk, because in bottom-up mode the directories in dirnames are generated before dirpath itself is generated.

By default, errors from the scandir() call are ignored. If optional argument onerror is specified, it should be a function; it will be called with one argument, an OSError instance. It can report the error to continue with the walk, or raise the exception to abort the walk. Note that the filename is available as the filename attribute of the exception object.

By default, walk() will not walk down into symbolic links that resolve to directories. Set followlinks to True to visit directories pointed to by symlinks, on systems that support them.

Be aware that setting followlinks to True can lead to infinite recursion if a link points to a parent directory of itself. walk() does not keep track of the directories it visited already.

If you pass a relative pathname, don’t change the current working directory between resumptions of walk() . walk() never changes the current directory, and assumes that its caller doesn’t either.

This example displays the number of bytes taken by non-directory files in each directory under the starting directory, except that it doesn’t look under any CVS subdirectory:
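A sketch of that example; the starting directory 'python/Lib/email' is illustrative:

    import os
    from os.path import join, getsize
    for root, dirs, files in os.walk('python/Lib/email'):
        print(root, "consumes", end=" ")
        print(sum(getsize(join(root, name)) for name in files), end=" ")
        print("bytes in", len(files), "non-directory files")
        if 'CVS' in dirs:
            dirs.remove('CVS')  # don't visit CVS directories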

In the next example (a simple implementation of shutil.rmtree() ), walking the tree bottom-up is essential: rmdir() doesn’t allow deleting a directory before the directory is empty:
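A sketch of that bottom-up deletion; the value of top is an illustrative placeholder:

    # Delete everything reachable from the directory named in "top",
    # assuming there are no symbolic links.
    # CAUTION: This is dangerous! For example, if top == '/', it
    # could delete all your disk files.
    import os
    top = '/tmp/some_tree'   # illustrative starting directory
    for root, dirs, files in os.walk(top, topdown=False):
        for name in files:
            os.remove(os.path.join(root, name))
        for name in dirs:
            os.rmdir(os.path.join(root, name))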

Raises an auditing event os.walk with arguments top , topdown , onerror , followlinks .

Changed in version 3.5: This function now calls os.scandir() instead of os.listdir() , making it faster by reducing the number of calls to os.stat() .

This behaves exactly like walk() , except that it yields a 4-tuple (dirpath, dirnames, filenames, dirfd) , and it supports dir_fd .

dirpath , dirnames and filenames are identical to walk() output, and dirfd is a file descriptor referring to the directory dirpath .

This function always supports paths relative to directory descriptors and not following symlinks . Note however that, unlike other functions, the fwalk() default value for follow_symlinks is False .

Since fwalk() yields file descriptors, those are only valid until the next iteration step, so you should duplicate them (e.g. with dup() ) if you want to keep them longer.

In the next example, walking the tree bottom-up is essential: rmdir() doesn’t allow deleting a directory before the directory is empty:
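A sketch of that bottom-up variant using directory file descriptors; top is again an illustrative placeholder:

    import os
    top = '/tmp/some_tree'   # illustrative starting directory
    for root, dirs, files, rootfd in os.fwalk(top, topdown=False):
        for name in files:
            os.unlink(name, dir_fd=rootfd)
        for name in dirs:
            os.rmdir(name, dir_fd=rootfd)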

Raises an auditing event os.fwalk with arguments top , topdown , onerror , follow_symlinks , dir_fd .

Changed in version 3.7: Added support for bytes paths.

Create an anonymous file and return a file descriptor that refers to it. flags must be one of the os.MFD_* constants available on the system (or a bitwise ORed combination of them). By default, the new file descriptor is non-inheritable .

The name supplied in name is used as a filename and will be displayed as the target of the corresponding symbolic link in the directory /proc/self/fd/ . The displayed name is always prefixed with memfd: and serves only for debugging purposes. Names do not affect the behavior of the file descriptor, and as such multiple files can have the same name without any side effects.
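A minimal sketch (Linux 3.17+ with glibc 2.27+): create an anonymous file, write to it through a regular file object, and read it back; the name 'scratch' is only a debugging label:

    import os

    fd = os.memfd_create("scratch", os.MFD_CLOEXEC)
    with os.fdopen(fd, "w+b") as f:
        f.write(b"hello")
        f.seek(0)
        print(f.read())   # b'hello'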

Availability : Linux 3.17 or newer with glibc 2.27 or newer.

These flags can be passed to memfd_create() .

Availability : Linux 3.17 or newer with glibc 2.27 or newer. The MFD_HUGE* flags are only available since Linux 4.14.

Linux extended attributes

These functions are all available on Linux only.

Return the value of the extended filesystem attribute attribute for path . attribute can be bytes or str (directly or indirectly through the PathLike interface). If it is str, it is encoded with the filesystem encoding.

Raises an auditing event os.getxattr with arguments path , attribute .

Changed in version 3.6: Accepts a path-like object for path and attribute .

Return a list of the extended filesystem attributes on path . The attributes in the list are represented as strings decoded with the filesystem encoding. If path is None , listxattr() will examine the current directory.

Raises an auditing event os.listxattr with argument path .

Removes the extended filesystem attribute attribute from path . attribute should be bytes or str (directly or indirectly through the PathLike interface). If it is a string, it is encoded with the filesystem encoding.

Raises an auditing event os.removexattr with arguments path , attribute .

Set the extended filesystem attribute attribute on path to value . attribute must be a bytes or str with no embedded NULs (directly or indirectly through the PathLike interface). If it is a str, it is encoded with the filesystem encoding. flags may be XATTR_REPLACE or XATTR_CREATE . If XATTR_REPLACE is given and the attribute does not exist, ENODATA will be raised. If XATTR_CREATE is given and the attribute already exists, the attribute will not be created and EEXIST will be raised.

A bug in Linux kernel versions less than 2.6.39 caused the flags argument to be ignored on some filesystems.

Raises an auditing event os.setxattr with arguments path , attribute , value , flags .

The maximum size the value of an extended attribute can be. Currently, this is 64 KiB on Linux.

This is a possible value for the flags argument in setxattr() . It indicates the operation must create an attribute.

This is a possible value for the flags argument in setxattr() . It indicates the operation must replace an existing attribute.
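A minimal sketch of the extended-attribute functions described above (Linux only); "data.bin" is an illustrative, already existing file on a filesystem with xattr support:

    import os

    os.setxattr("data.bin", "user.comment", b"example", os.XATTR_CREATE)
    print(os.getxattr("data.bin", "user.comment"))   # b'example'
    print(os.listxattr("data.bin"))                  # ['user.comment']
    os.removexattr("data.bin", "user.comment")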

Process Management

These functions may be used to create and manage processes.

The various exec* functions take a list of arguments for the new program loaded into the process. In each case, the first of these arguments is passed to the new program as its own name rather than as an argument a user may have typed on a command line. For the C programmer, this is the argv[0] passed to a program’s main() . For example, os.execv('/bin/echo', ['foo', 'bar']) will only print bar on standard output; foo will seem to be ignored.

Generate a SIGABRT signal to the current process. On Unix, the default behavior is to produce a core dump; on Windows, the process immediately returns an exit code of 3 . Be aware that calling this function will not call the Python signal handler registered for SIGABRT with signal.signal() .

Add a path to the DLL search path.

This search path is used when resolving dependencies for imported extension modules (the module itself is resolved through sys.path ), and also by ctypes .

Remove the directory by calling close() on the returned object or using it in a with statement.

See the Microsoft documentation for more information about how DLLs are loaded.

Raises an auditing event os.add_dll_directory with argument path .

New in version 3.8: Previous versions of CPython would resolve DLLs using the default behavior for the current process. This led to inconsistencies, such as only sometimes searching PATH or the current working directory, and OS functions such as AddDllDirectory having no effect.

In 3.8, the two primary ways DLLs are loaded now explicitly override the process-wide behavior to ensure consistency. See the porting notes for information on updating libraries.

These functions all execute a new program, replacing the current process; they do not return. On Unix, the new executable is loaded into the current process, and will have the same process id as the caller. Errors will be reported as OSError exceptions.

The current process is replaced immediately. Open file objects and descriptors are not flushed, so if there may be data buffered on these open files, you should flush them using sys.stdout.flush() or os.fsync() before calling an exec* function.

The “l” and “v” variants of the exec* functions differ in how command-line arguments are passed. The “l” variants are perhaps the easiest to work with if the number of parameters is fixed when the code is written; the individual parameters simply become additional parameters to the execl*() functions. The “v” variants are good when the number of parameters is variable, with the arguments being passed in a list or tuple as the args parameter. In either case, the arguments to the child process should start with the name of the command being run, but this is not enforced.

The variants which include a “p” near the end ( execlp() , execlpe() , execvp() , and execvpe() ) will use the PATH environment variable to locate the program file . When the environment is being replaced (using one of the exec*e variants, discussed in the next paragraph), the new environment is used as the source of the PATH variable. The other variants, execl() , execle() , execv() , and execve() , will not use the PATH variable to locate the executable; path must contain an appropriate absolute or relative path.

For execle() , execlpe() , execve() , and execvpe() (note that these all end in “e”), the env parameter must be a mapping which is used to define the environment variables for the new process (these are used instead of the current process’ environment); the functions execl() , execlp() , execv() , and execvp() all cause the new process to inherit the environment of the current process.

For execve() on some platforms, path may also be specified as an open file descriptor. This functionality may not be supported on your platform; you can check whether or not it is available using os.supports_fd . If it is unavailable, using it will raise a NotImplementedError .
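A minimal sketch (Unix): fork, replace the child process with a program located via PATH using execvp(), and wait for it in the parent:

    import os

    pid = os.fork()
    if pid == 0:
        os.execvp("ls", ["ls", "-l"])   # does not return on success
    else:
        os.waitpid(pid, 0)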

Raises an auditing event os.exec with arguments path , args , env .

New in version 3.3: Added support for specifying path as an open file descriptor for execve() .

Exit the process with status n , without calling cleanup handlers, flushing stdio buffers, etc.

The standard way to exit is sys.exit(n) . _exit() should normally only be used in the child process after a fork() .

The following exit codes are defined and can be used with _exit() , although they are not required. These are typically used for system programs written in Python, such as a mail server’s external command delivery program.

Some of these may not be available on all Unix platforms, since there is some variation. These constants are defined where they are defined by the underlying platform.

Exit code that means no error occurred.

Exit code that means the command was used incorrectly, such as when the wrong number of arguments are given.

Exit code that means the input data was incorrect.

Exit code that means an input file did not exist or was not readable.

Exit code that means a specified user did not exist.

Exit code that means a specified host did not exist.

Exit code that means that a required service is unavailable.

Exit code that means an internal software error was detected.

Exit code that means an operating system error was detected, such as the inability to fork or create a pipe.

Exit code that means some system file did not exist, could not be opened, or had some other kind of error.

Exit code that means a user specified output file could not be created.

Exit code that means that an error occurred while doing I/O on some file.

Exit code that means a temporary failure occurred. This indicates something that may not really be an error, such as a network connection that couldn’t be made during a retryable operation.

Exit code that means that a protocol exchange was illegal, invalid, or not understood.

Exit code that means that there were insufficient permissions to perform the operation (but not intended for file system problems).

Exit code that means that some kind of configuration error occurred.

Exit code that means something like “an entry was not found”.

Fork a child process. Return 0 in the child and the child’s process id in the parent. If an error occurs OSError is raised.

Note that some platforms including FreeBSD <= 6.3 and Cygwin have known issues when using fork() from a thread.

Raises an auditing event os.fork with no arguments.

Changed in version 3.8: Calling fork() in a subinterpreter is no longer supported ( RuntimeError is raised).

See ssl for applications that use the SSL module with fork().

Fork a child process, using a new pseudo-terminal as the child’s controlling terminal. Return a pair of (pid, fd) , where pid is 0 in the child, the new child’s process id in the parent, and fd is the file descriptor of the master end of the pseudo-terminal. For a more portable approach, use the pty module. If an error occurs OSError is raised.

Raises an auditing event os.forkpty with no arguments.

Changed in version 3.8: Calling forkpty() in a subinterpreter is no longer supported ( RuntimeError is raised).

Send signal sig to the process pid . Constants for the specific signals available on the host platform are defined in the signal module.

Windows: The signal.CTRL_C_EVENT and signal.CTRL_BREAK_EVENT signals are special signals which can only be sent to console processes which share a common console window, e.g., some subprocesses. Any other value for sig will cause the process to be unconditionally killed by the TerminateProcess API, and the exit code will be set to sig . The Windows version of kill() additionally takes process handles to be killed.

See also signal.pthread_kill() .

Raises an auditing event os.kill with arguments pid , sig .

New in version 3.2: Windows support.
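A minimal sketch (Unix): start a child process with the subprocess module and terminate it with SIGTERM:

    import os
    import signal
    import subprocess

    proc = subprocess.Popen(["sleep", "60"])
    os.kill(proc.pid, signal.SIGTERM)
    print(proc.wait())   # -15 on Unix: terminated by SIGTERM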

Send the signal sig to the process group pgid .

Raises an auditing event os.killpg with arguments pgid , sig .

Add increment to the process’s “niceness”. Return the new niceness.

Return a file descriptor referring to the process pid . This descriptor can be used to perform process management without races and signals. The flags argument is provided for future extensions; no flag values are currently defined.

See the pidfd_open(2) man page for more details.

Availability : Linux 5.3+

New in version 3.9.

Lock program segments into memory. The value of op (defined in <sys/lock.h> ) determines which segments are locked.

Open a pipe to or from command cmd . The return value is an open file object connected to the pipe, which can be read or written depending on whether mode is 'r' (default) or 'w' . The buffering argument has the same meaning as the corresponding argument to the built-in open() function. The returned file object reads or writes text strings rather than bytes.

The close method returns None if the subprocess exited successfully, or the subprocess’s return code if there was an error. On POSIX systems, if the return code is positive it represents the return value of the process left-shifted by one byte. If the return code is negative, the process was terminated by the signal given by the negated value of the return code. (For example, the return value might be -signal.SIGKILL if the subprocess was killed.) On Windows systems, the return value contains the signed integer return code from the child process.

On Unix, waitstatus_to_exitcode() can be used to convert the close method result (exit status) into an exit code if it is not None . On Windows, the close method result is directly the exit code (or None ).

This is implemented using subprocess.Popen ; see that class’s documentation for more powerful ways to manage and communicate with subprocesses.
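A minimal sketch: read the output of a shell command and check the close status:

    import os

    pipe = os.popen("echo hello")
    print(pipe.read().strip())   # 'hello'
    print(pipe.close())          # None: the command exited successfully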

Wraps the posix_spawn() C library API for use from Python.

Most users should use subprocess.run() instead of posix_spawn() .

The positional-only arguments path , args , and env are similar to execve() .

The path parameter is the path to the executable file. The path should contain a directory. Use posix_spawnp() to pass an executable file without directory.

The file_actions argument may be a sequence of tuples describing actions to take on specific file descriptors in the child process between the C library implementation’s fork() and exec() steps. The first item in each tuple must be one of the three type indicators listed below describing the remaining tuple elements:

( os.POSIX_SPAWN_OPEN , fd , path , flags , mode )

Performs os.dup2(os.open(path, flags, mode), fd) .

( os.POSIX_SPAWN_CLOSE , fd )

Performs os.close(fd) .

( os.POSIX_SPAWN_DUP2 , fd , new_fd )

Performs os.dup2(fd, new_fd) .

These tuples correspond to the C library posix_spawn_file_actions_addopen() , posix_spawn_file_actions_addclose() , and posix_spawn_file_actions_adddup2() API calls used to prepare for the posix_spawn() call itself.
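A minimal sketch (POSIX): spawn /bin/ls with standard output redirected to a file through a POSIX_SPAWN_OPEN action; the executable path and output file name are illustrative:

    import os

    pid = os.posix_spawn(
        "/bin/ls",
        ["ls", "-l"],
        os.environ,
        file_actions=[
            (os.POSIX_SPAWN_OPEN, 1, "listing.txt",
             os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644),
        ],
    )
    print(os.waitpid(pid, 0))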

The setpgroup argument will set the process group of the child to the value specified. If the value specified is 0, the child’s process group ID will be made the same as its process ID. If the value of setpgroup is not set, the child will inherit the parent’s process group ID. This argument corresponds to the C library POSIX_SPAWN_SETPGROUP flag.

If the resetids argument is True it will reset the effective UID and GID of the child to the real UID and GID of the parent process. If the argument is False , then the child retains the effective UID and GID of the parent. In either case, if the set-user-ID and set-group-ID permission bits are enabled on the executable file, their effect will override the setting of the effective UID and GID. This argument corresponds to the C library POSIX_SPAWN_RESETIDS flag.

If the setsid argument is True , it will create a new session ID for posix_spawn . setsid requires POSIX_SPAWN_SETSID or POSIX_SPAWN_SETSID_NP flag. Otherwise, NotImplementedError is raised.

The setsigmask argument will set the signal mask to the signal set specified. If the parameter is not used, then the child inherits the parent’s signal mask. This argument corresponds to the C library POSIX_SPAWN_SETSIGMASK flag.

The sigdef argument will reset the disposition of all signals in the set specified. This argument corresponds to the C library POSIX_SPAWN_SETSIGDEF flag.

The scheduler argument must be a tuple containing the (optional) scheduler policy and an instance of sched_param with the scheduler parameters. A value of None in the place of the scheduler policy indicates that it is not being provided. This argument is a combination of the C library POSIX_SPAWN_SETSCHEDPARAM and POSIX_SPAWN_SETSCHEDULER flags.

Raises an auditing event os.posix_spawn with arguments path , argv , env .

Wraps the posix_spawnp() C library API for use from Python.

Similar to posix_spawn() except that the system searches for the executable file in the list of directories specified by the PATH environment variable (in the same way as for execvp(3) ).

Availability : See posix_spawn() documentation.

Register callables to be executed when a new child process is forked using os.fork() or similar process cloning APIs. The parameters are optional and keyword-only. Each specifies a different call point.

before is a function called before forking a child process.

after_in_parent is a function called from the parent process after forking a child process.

after_in_child is a function called from the child process.

These calls are only made if control is expected to return to the Python interpreter. A typical subprocess launch will not trigger them as the child is not going to re-enter the interpreter.

Functions registered for execution before forking are called in reverse registration order. Functions registered for execution after forking (either in the parent or in the child) are called in registration order.

Note that fork() calls made by third-party C code may not call those functions, unless it explicitly calls PyOS_BeforeFork() , PyOS_AfterFork_Parent() and PyOS_AfterFork_Child() .

There is no way to unregister a function.
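A minimal sketch (Unix): register callbacks that run around os.fork():

    import os

    os.register_at_fork(
        before=lambda: print("about to fork"),
        after_in_parent=lambda: print("in the parent after fork"),
        after_in_child=lambda: print("in the child after fork"),
    )

    pid = os.fork()
    if pid == 0:
        os._exit(0)
    else:
        os.waitpid(pid, 0)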

Execute the program path in a new process.

(Note that the subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using these functions. Check especially the Replacing Older Functions with the subprocess Module section.)

If mode is P_NOWAIT , this function returns the process id of the new process; if mode is P_WAIT , returns the process’s exit code if it exits normally, or -signal , where signal is the signal that killed the process. On Windows, the process id will actually be the process handle, so can be used with the waitpid() function.

Note that on VxWorks, this function doesn’t return -signal when the new process is killed. Instead it raises an OSError exception.

The “l” and “v” variants of the spawn* functions differ in how command-line arguments are passed. The “l” variants are perhaps the easiest to work with if the number of parameters is fixed when the code is written; the individual parameters simply become additional parameters to the spawnl*() functions. The “v” variants are good when the number of parameters is variable, with the arguments being passed in a list or tuple as the args parameter. In either case, the arguments to the child process must start with the name of the command being run.

The variants which include a second “p” near the end ( spawnlp() , spawnlpe() , spawnvp() , and spawnvpe() ) will use the PATH environment variable to locate the program file . When the environment is being replaced (using one of the spawn*e variants, discussed in the next paragraph), the new environment is used as the source of the PATH variable. The other variants, spawnl() , spawnle() , spawnv() , and spawnve() , will not use the PATH variable to locate the executable; path must contain an appropriate absolute or relative path.

For spawnle() , spawnlpe() , spawnve() , and spawnvpe() (note that these all end in “e”), the env parameter must be a mapping which is used to define the environment variables for the new process (they are used instead of the current process’ environment); the functions spawnl() , spawnlp() , spawnv() , and spawnvp() all cause the new process to inherit the environment of the current process. Note that keys and values in the env dictionary must be strings; invalid keys or values will cause the function to fail, with a return value of 127 .

As an example, the following calls to spawnlp() and spawnvpe() are equivalent:
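A sketch of that equivalence; the copied file names are illustrative:

    import os
    os.spawnlp(os.P_WAIT, 'cp', 'cp', 'index.html', '/dev/null')

    L = ['cp', 'index.html', '/dev/null']
    os.spawnvpe(os.P_WAIT, 'cp', L, os.environ)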

Raises an auditing event os.spawn with arguments mode , path , args , env .

Availability : Unix, Windows. spawnlp() , spawnlpe() , spawnvp() and spawnvpe() are not available on Windows. spawnle() and spawnve() are not thread-safe on Windows; we advise you to use the subprocess module instead.

Possible values for the mode parameter to the spawn* family of functions. If either of these values is given, the spawn*() functions will return as soon as the new process has been created, with the process id as the return value.

Possible value for the mode parameter to the spawn* family of functions. If this is given as mode , the spawn*() functions will not return until the new process has run to completion and will return the exit code of the process if the run is successful, or -signal if a signal kills the process.

Possible values for the mode parameter to the spawn* family of functions. These are less portable than those listed above. P_DETACH is similar to P_NOWAIT , but the new process is detached from the console of the calling process. If P_OVERLAY is used, the current process will be replaced; the spawn* function will not return.

Start a file with its associated application.

When operation is not specified or 'open' , this acts like double-clicking the file in Windows Explorer, or giving the file name as an argument to the start command from the interactive command shell: the file is opened with whatever application (if any) its extension is associated with.

When another operation is given, it must be a “command verb” that specifies what should be done with the file. Common verbs documented by Microsoft are 'print' and 'edit' (to be used on files) as well as 'explore' and 'find' (to be used on directories).

startfile() returns as soon as the associated application is launched. There is no option to wait for the application to close, and no way to retrieve the application’s exit status. The path parameter is relative to the current directory. If you want to use an absolute path, make sure the first character is not a slash ( '/' ); the underlying Win32 ShellExecute() function doesn’t work if it is. Use the os.path.normpath() function to ensure that the path is properly encoded for Win32.

To reduce interpreter startup overhead, the Win32 ShellExecute() function is not resolved until this function is first called. If the function cannot be resolved, NotImplementedError will be raised.
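A minimal sketch (Windows only); the file name is illustrative:

    import os

    os.startfile("report.pdf")             # open with the associated application
    # os.startfile("report.pdf", "print")  # or send it to the default printer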

Raises an auditing event os.startfile with arguments path , operation .

Execute the command (a string) in a subshell. This is implemented by calling the Standard C function system() , and has the same limitations. Changes to sys.stdin , etc. are not reflected in the environment of the executed command. If command generates any output, it will be sent to the interpreter standard output stream. The C standard does not specify the meaning of the return value of the C function, so the return value of the Python function is system-dependent.

On Unix, the return value is the exit status of the process encoded in the format specified for wait() .

On Windows, the return value is that returned by the system shell after running command . The shell is given by the Windows environment variable COMSPEC : it is usually cmd.exe , which returns the exit status of the command run; on systems using a non-native shell, consult your shell documentation.

The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.

On Unix, waitstatus_to_exitcode() can be used to convert the result (exit status) into an exit code. On Windows, the result is directly the exit code.
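A minimal sketch; the Unix interpretation of the return value is shown:

    import os

    status = os.system("echo hello")
    # On Unix the result is a wait status; convert it to an exit code:
    print(os.waitstatus_to_exitcode(status))   # 0 on success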

Raises an auditing event os.system with argument command .

Returns the current global process times. The return value is an object with five attributes:

user - user time

system - system time

children_user - user time of all child processes

children_system - system time of all child processes

elapsed - elapsed real time since a fixed point in the past

For backwards compatibility, this object also behaves like a five-tuple containing user , system , children_user , children_system , and elapsed in that order.

See the Unix manual page times(2) and times(3) manual page on Unix or the GetProcessTimes MSDN on Windows. On Windows, only user and system are known; the other attributes are zero.

Wait for completion of a child process, and return a tuple containing its pid and exit status indication: a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero); the high bit of the low byte is set if a core file was produced.

waitstatus_to_exitcode() can be used to convert the exit status into an exit code.

waitpid() can be used to wait for the completion of a specific child process and has more options.

Wait for the completion of one or more child processes. idtype can be P_PID , P_PGID , P_ALL , or P_PIDFD on Linux. id specifies the pid to wait on. options is constructed from the ORing of one or more of WEXITED , WSTOPPED or WCONTINUED and additionally may be ORed with WNOHANG or WNOWAIT . The return value is an object representing the data contained in the siginfo_t structure, namely: si_pid , si_uid , si_signo , si_status , si_code or None if WNOHANG is specified and there are no children in a waitable state.

These are the possible values for idtype in waitid() . They affect how id is interpreted.

This is a Linux-specific idtype that indicates that id is a file descriptor that refers to a process.

Availability : Linux 5.4+

Flags that can be used in options in waitid() that specify what child signal to wait for.

These are the possible values for si_code in the result returned by waitid() .

Changed in version 3.9: Added CLD_KILLED and CLD_STOPPED values.

The details of this function differ on Unix and Windows.

On Unix: Wait for completion of a child process given by process id pid , and return a tuple containing its process id and exit status indication (encoded as for wait() ). The semantics of the call are affected by the value of the integer options , which should be 0 for normal operation.

If pid is greater than 0 , waitpid() requests status information for that specific process. If pid is 0 , the request is for the status of any child in the process group of the current process. If pid is -1 , the request pertains to any child of the current process. If pid is less than -1 , status is requested for any process in the process group -pid (the absolute value of pid ).

An OSError is raised with the value of errno when the syscall returns -1.

On Windows: Wait for completion of a process given by process handle pid , and return a tuple containing pid , and its exit status shifted left by 8 bits (shifting makes cross-platform use of the function easier). A pid less than or equal to 0 has no special meaning on Windows, and raises an exception. The value of integer options has no effect. pid can refer to any process whose id is known, not necessarily a child process. The spawn* functions called with P_NOWAIT return suitable process handles.

Similar to waitpid() , except no process id argument is given and a 3-element tuple containing the child’s process id, exit status indication, and resource usage information is returned. Refer to resource.getrusage() for details on resource usage information. The option argument is the same as that provided to waitpid() and wait4() .

waitstatus_to_exitcode() can be used to convert the exit status into an exit code.

Similar to waitpid() , except a 3-element tuple containing the child’s process id, exit status indication, and resource usage information is returned. Refer to resource.getrusage() for details on resource usage information. The arguments to wait4() are the same as those provided to waitpid() .

Convert a wait status to an exit code.

If the process exited normally (if WIFEXITED(status) is true), return the process exit status (return WEXITSTATUS(status) ): result greater than or equal to 0.

If the process was terminated by a signal (if WIFSIGNALED(status) is true), return -signum where signum is the number of the signal that caused the process to terminate (return -WTERMSIG(status) ): result less than 0.

Otherwise, raise a ValueError .

On Windows, return status shifted right by 8 bits.

On Unix, if the process is being traced or if waitpid() was called with WUNTRACED option, the caller must first check if WIFSTOPPED(status) is true. This function must not be called if WIFSTOPPED(status) is true.
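A minimal sketch (Unix): convert the wait status of a child that exits with status 7:

    import os

    pid = os.fork()
    if pid == 0:
        os._exit(7)
    else:
        _, status = os.waitpid(pid, 0)
        print(os.waitstatus_to_exitcode(status))   # 7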

See also: the WIFEXITED() , WEXITSTATUS() , WIFSIGNALED() , WTERMSIG() , WIFSTOPPED() , and WSTOPSIG() functions.

The option for waitpid() to return immediately if no child process status is available immediately. The function returns (0, 0) in this case.

This option causes child processes to be reported if they have been continued from a job control stop since their status was last reported.

Availability : some Unix systems.

This option causes child processes to be reported if they have been stopped but their current state has not been reported since they were stopped.

The following functions take a process status code as returned by system() , wait() , or waitpid() as a parameter. They may be used to determine the disposition of a process.

Return True if a core dump was generated for the process, otherwise return False .

This function should be employed only if WIFSIGNALED() is true.

Return True if a stopped child has been resumed by delivery of SIGCONT (if the process has been continued from a job control stop), otherwise return False .

See WCONTINUED option.

Return True if the process was stopped by delivery of a signal, otherwise return False .

WIFSTOPPED() only returns True if the waitpid() call was done using WUNTRACED option or when the process is being traced (see ptrace(2) ).

Return True if the process was terminated by a signal, otherwise return False .

Return True if the process terminated normally, that is, by calling exit() or _exit() , or by returning from main() ; otherwise return False .

Return the process exit status.

This function should be employed only if WIFEXITED() is true.

Return the signal which caused the process to stop.

This function should be employed only if WIFSTOPPED() is true.

Return the number of the signal that caused the process to terminate.

Interface to the scheduler

These functions control how a process is allocated CPU time by the operating system. They are only available on some Unix platforms. For more detailed information, consult your Unix manpages.

The following scheduling policies are exposed if they are supported by the operating system.

The default scheduling policy.

Scheduling policy for CPU-intensive processes that tries to preserve interactivity on the rest of the computer.

Scheduling policy for extremely low priority background tasks.

Scheduling policy for sporadic server programs.

A First In First Out scheduling policy.

A round-robin scheduling policy.

This flag can be OR’ed with any other scheduling policy. When a process with this flag set forks, its child’s scheduling policy and priority are reset to the default.

This class represents tunable scheduling parameters used in sched_setparam() , sched_setscheduler() , and sched_getparam() . It is immutable.

At the moment, there is only one possible parameter:

The scheduling priority for a scheduling policy.

Get the minimum priority value for policy . policy is one of the scheduling policy constants above.

Get the maximum priority value for policy . policy is one of the scheduling policy constants above.

Set the scheduling policy for the process with PID pid . A pid of 0 means the calling process. policy is one of the scheduling policy constants above. param is a sched_param instance.

Return the scheduling policy for the process with PID pid . A pid of 0 means the calling process. The result is one of the scheduling policy constants above.

Set the scheduling parameters for the process with PID pid . A pid of 0 means the calling process. param is a sched_param instance.

Return the scheduling parameters as a sched_param instance for the process with PID pid . A pid of 0 means the calling process.

Return the round-robin quantum in seconds for the process with PID pid . A pid of 0 means the calling process.

Voluntarily relinquish the CPU.

Restrict the process with PID pid (or the current process if zero) to a set of CPUs. mask is an iterable of integers representing the set of CPUs to which the process should be restricted.

Return the set of CPUs the process with PID pid (or the current process if zero) is restricted to.
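A minimal sketch (available only on some Unix platforms): pin the current process to CPUs 0 and 1, then read the mask back:

    import os

    os.sched_setaffinity(0, {0, 1})
    print(os.sched_getaffinity(0))   # e.g. {0, 1}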

Miscellaneous System Information

Return string-valued system configuration values. name specifies the configuration value to retrieve; it may be a string which is the name of a defined system value; these names are specified in a number of standards (POSIX, Unix 95, Unix 98, and others). Some platforms define additional names as well. The names known to the host operating system are given as the keys of the confstr_names dictionary. For configuration variables not included in that mapping, passing an integer for name is also accepted.

If the configuration value specified by name isn’t defined, None is returned.

If name is a string and is not known, ValueError is raised. If a specific value for name is not supported by the host system, even if it is included in confstr_names , an OSError is raised with errno.EINVAL for the error number.

Dictionary mapping names accepted by confstr() to the integer values defined for those names by the host operating system. This can be used to determine the set of names known to the system.
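For example, on a typical Unix system:

    import os

    print(os.confstr("CS_PATH"))   # e.g. '/bin:/usr/bin'
    print(len(os.confstr_names))   # number of names known to the system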

Return the number of CPUs in the system. Returns None if undetermined.

This number is not equivalent to the number of CPUs the current process can use. The number of usable CPUs can be obtained with len(os.sched_getaffinity(0))

Return the number of processes in the system run queue averaged over the last 1, 5, and 15 minutes, or raise OSError if the load average is unobtainable.

Return integer-valued system configuration values. If the configuration value specified by name isn’t defined, -1 is returned. The comments regarding the name parameter for confstr() apply here as well; the dictionary that provides information on the known names is given by sysconf_names .

Dictionary mapping names accepted by sysconf() to the integer values defined for those names by the host operating system. This can be used to determine the set of names known to the system.
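For example, on a typical Linux system (assuming the names shown are defined there):

    import os

    print(os.sysconf("SC_PAGE_SIZE"))          # e.g. 4096
    print(os.sysconf("SC_NPROCESSORS_ONLN"))   # CPUs currently online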

The following data values are used to support path manipulation operations. These are defined for all platforms.

Higher-level operations on pathnames are defined in the os.path module.

The constant string used by the operating system to refer to the current directory. This is '.' for Windows and POSIX. Also available via os.path .

The constant string used by the operating system to refer to the parent directory. This is '..' for Windows and POSIX. Also available via os.path .

The character used by the operating system to separate pathname components. This is '/' for POSIX and '\\' for Windows. Note that knowing this is not sufficient to be able to parse or concatenate pathnames — use os.path.split() and os.path.join() — but it is occasionally useful. Also available via os.path .

An alternative character used by the operating system to separate pathname components, or None if only one separator character exists. This is set to '/' on Windows systems where sep is a backslash. Also available via os.path .

The character which separates the base filename from the extension; for example, the '.' in os.py . Also available via os.path .

The character conventionally used by the operating system to separate search path components (as in PATH ), such as ':' for POSIX or ';' for Windows. Also available via os.path .

The default search path used by exec*p* and spawn*p* if the environment doesn’t have a 'PATH' key. Also available via os.path .

The string used to separate (or, rather, terminate) lines on the current platform. This may be a single character, such as '\n' for POSIX, or multiple characters, for example, '\r\n' for Windows. Do not use os.linesep as a line terminator when writing files opened in text mode (the default); use a single '\n' instead, on all platforms.

The file path of the null device. For example: '/dev/null' for POSIX, 'nul' for Windows. Also available via os.path .
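These constants can simply be printed; the comments show the values a typical POSIX system reports (Windows differs as described above):

    import os

    print(repr(os.curdir))   # '.'
    print(repr(os.pardir))   # '..'
    print(repr(os.sep))      # '/'  ('\\' on Windows)
    print(repr(os.extsep))   # '.'
    print(repr(os.pathsep))  # ':'  (';' on Windows)
    print(repr(os.linesep))  # '\n' ('\r\n' on Windows)
    print(repr(os.devnull))  # '/dev/null' ('nul' on Windows)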

Flags for use with the setdlopenflags() and getdlopenflags() functions. See the Unix manual page dlopen(3) for what the different flags mean.

Random numbers ¶

Get up to size random bytes. The function can return fewer bytes than requested.

These bytes can be used to seed user-space random number generators or for cryptographic purposes.

getrandom() relies on entropy gathered from device drivers and other sources of environmental noise. Unnecessarily reading large quantities of data will have a negative impact on other users of the /dev/random and /dev/urandom devices.

The flags argument is a bit mask that can contain zero or more of the following values ORed together: os.GRND_RANDOM and GRND_NONBLOCK .

See also the Linux getrandom() manual page .

Availability : Linux 3.17 and newer.
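A hedged sketch, assuming a Linux kernel recent enough to provide the syscall (the guard skips the call elsewhere):

    import os

    if hasattr(os, "getrandom"):
        data = os.getrandom(16)                 # may block until the urandom pool is ready
        print(len(data), data.hex())
        try:
            os.getrandom(16, os.GRND_NONBLOCK)  # fail fast instead of blocking
        except BlockingIOError:
            print("entropy pool not initialized yet")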

Return a bytestring of size random bytes suitable for cryptographic use.

This function returns random bytes from an OS-specific randomness source. The returned data should be unpredictable enough for cryptographic applications, though its exact quality depends on the OS implementation.

On Linux, if the getrandom() syscall is available, it is used in blocking mode: block until the system urandom entropy pool is initialized (128 bits of entropy are collected by the kernel). See the PEP 524 for the rationale. On Linux, the getrandom() function can be used to get random bytes in non-blocking mode (using the GRND_NONBLOCK flag) or to poll until the system urandom entropy pool is initialized.

On a Unix-like system, random bytes are read from the /dev/urandom device. If the /dev/urandom device is not available or not readable, the NotImplementedError exception is raised.

On Windows, it will use CryptGenRandom() .

The secrets module provides higher level functions. For an easy-to-use interface to the random number generator provided by your platform, please see random.SystemRandom .
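For example, os.urandom() is the portable call, and secrets offers a friendlier wrapper over the same OS-level source:

    import os
    import secrets

    print(os.urandom(16).hex())    # 16 cryptographically strong random bytes
    print(secrets.token_hex(16))   # higher-level helper for the common token case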

Changed in version 3.6.0: On Linux, getrandom() is now used in blocking mode to increase the security.

Changed in version 3.5.2: On Linux, if the getrandom() syscall blocks (the urandom entropy pool is not initialized yet), fall back on reading /dev/urandom .

Changed in version 3.5: On Linux 3.17 and newer, the getrandom() syscall is now used when available. On OpenBSD 5.6 and newer, the C getentropy() function is now used. These functions avoid the usage of an internal file descriptor.

By default, when reading from /dev/random , getrandom() blocks if no random bytes are available, and when reading from /dev/urandom , it blocks if the entropy pool has not yet been initialized.

If the GRND_NONBLOCK flag is set, then getrandom() does not block in these cases, but instead immediately raises BlockingIOError .

If this bit ( GRND_RANDOM ) is set, then random bytes are drawn from the /dev/random pool instead of the /dev/urandom pool.


Python tuple assignment and checking in conditional statements [duplicate]

So I stumbled onto a particular behaviour of tuples in Python, and I was wondering whether there is a particular reason for it.

While we are perfectly capable of assigning a tuple to a variable without explicitly enclosing it in parentheses:
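    >>> foo_bar_tuple = "foo", "bar"
    >>> foo_bar_tuple
    ('foo', 'bar')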

we are not able to print the variable containing the tuple, or check it in a conditional if statement, in the same fashion (without explicitly typing the parentheses):
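    >>> print foo_bar_tuple == "foo", "bar"
    False bar
    >>> if foo_bar_tuple == "foo", "bar": print "hello"
    SyntaxError: invalid syntax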

Does anyone know why? Thanks in advance, and although I didn't find any similar topic, please inform me if you think it is a possible duplicate. Cheers, Alex


  • 1 Essentially, commas, when used in assignments, are what actually create tuples, not the parentheses. However, the eq operator is a function that takes only a single argument, and when you pass it values separated by commas, it treats that as passing arg rather than passing the tuple 'foo','bar' . Wrapping in parens forces the tuple assignment to happen before the evaluation of arg , so it behaves as expected. –  aruisdante Commented Mar 16, 2014 at 0:32
  • 1 To put it another way, if you think of foo_bar_tuple == 'foo','bar' as actually being foo_bar_tuple.__eq__('foo', 'bar') , you can immediately see why you need to wrap in parens to make it work –  aruisdante Commented Mar 16, 2014 at 0:35
  • @aruisdante: Your statement that foo_bar_tuple == 'foo', 'bar' is equivalent to foo_bar_tuple.__eq__('foo', 'bar') is incorrect. The comparison is happening with just the 'foo' string ( foo_bar_tuple.__eq__('foo') , which is False ) and 'bar' is left as a separate expression. –  Blckknght Commented Mar 16, 2014 at 1:01

3 Answers

It's because the expressions separated by commas are evaluated before the whole comma-separated tuple (which is an "expression list" in the terminology of the Python grammar). So when you do foo_bar_tuple=="foo", "bar" , that is interpreted as (foo_bar_tuple=="foo"), "bar" . This behavior is described in the documentation .

You can see this if you just write such an expression by itself:
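    >>> foo_bar_tuple = "foo", "bar"
    >>> foo_bar_tuple == "foo", "bar"
    (False, 'bar')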

The SyntaxError for the unparenthesized tuple is because an unparenthesized tuple is not an "atom" in the Python grammar, which means it's not valid as the sole content of an if condition. (You can verify this for yourself by tracing around the grammar .)


  • Right! Okay thanks for that mate! I will accept as soon as it will let me :) cheers –  Alex Koukoulas Commented Mar 16, 2014 at 0:37

Considering an example of if 1 == 1,2: , which should cause a SyntaxError , following the full grammar (the productions stepped through below are collected in an excerpt at the end of this answer):

Using the if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] , we get to shift the if keyword and start parsing 1 == 1,2:

For the test rule, only first production matches:

Then we get:

And step down into and_test :

Here we just step into not_test at the moment:

Notice, our input is 1 == 1,2: , thus the first production doesn't match and we check the other one: (1)

Continuing on stepping down (we take only the first non-terminal, as the zero-or-more star requires a terminal we don't have at all in our input):

Now we use the power production:

And shift NUMBER ( 1 in our input) and reduce. Now we are back at (1) with input ==1,2: to parse. == matches comp_op :

So we shift it and reduce, leaving us with input 1,2: to parse (the current parsing output is NUMBER comp_op ; we need to match expr now). We repeat the process for the right-hand side, going straight down to the atom nonterminal and selecting the NUMBER production. Shift and reduce.

Since , does not match any comp_op , we reduce the test non-terminal and receive 'if' NUMBER comp_op NUMBER . We need to match ':' now (with optional elif and else clauses to follow later), but we have , instead, so we fail with a SyntaxError .
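For reference, the productions stepped through above look like this in CPython 2.7's Grammar/Grammar file (only the rules used in the walkthrough are shown, and they are reproduced here from memory of that grammar, so treat the exact spelling as approximate):

    if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
    test: or_test ['if' or_test 'else' test] | lambdef
    or_test: and_test ('or' and_test)*
    and_test: not_test ('and' not_test)*
    not_test: 'not' not_test | comparison
    comparison: expr (comp_op expr)*
    comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
    expr: xor_expr ('|' xor_expr)*
    power: atom trailer* ['**' factor]

The failure itself is easy to reproduce interactively:

    >>> if 1 == 1,2: pass
    SyntaxError: invalid syntax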


I think the operator precedence table summarizes this nicely:

You'll see that comparisons come before (bind more tightly than) the comma that builds an expression list, which sits dead last.
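A quick interactive check of that ordering (the comparison is evaluated first, and only then does the comma build a tuple):

    >>> 1 == 1, 2
    (True, 2)
    >>> 1 == (1, 2)
    False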




COMMENTS

  1. python: multiple variables using tuple

    The way you used the tuple was only to assign the single values to single variables in one line. This doesn't store the tuple anywhere, so you'll be left with 4 variables with 4 different values. When you change the value of country, you change the value of this single variable, not of the tuple, as string variables are "call by value" in python.

  2. 11.3. Tuple Assignment

    Tuple Assignment — Python for Everybody - Interactive. 11.3. Tuple Assignment. One of the unique syntactic features of Python is the ability to have a tuple on the left side of an assignment statement. This allows you to assign more than one variable at a time when the left side is a sequence. In this example we have a two-element list ...

  3. Tuple Assignment: Introduction, Tuple Packing and Examples

    Besides tuple assignment is a special feature in python. We also call this feature unpacking of tuple. The process of assigning values to a tuple is known as packing. While on the other hand, the unpacking or tuple assignment is the process that assigns the values on the right-hand side to the left-hand side variables.

  4. Tuple Assignment, Packing, and Unpacking

    00:00 In this video, I'm going to show you tuple assignment through packing and unpacking. A literal tuple containing several items can be assigned to a single object, such as the example object here, t. 00:16 Assigning that packed object to a new tuple, unpacks the individual items into the objects in that new tuple. When unpacking, the number of variables on the left have to match the ...

  5. Python's tuple Data Type: A Deep Dive With Examples

    Getting Started With Python's tuple Data Type. The built-in tuple data type is probably the most elementary sequence available in Python. Tuples are immutable and can store a fixed number of items. For example, you can use tuples to represent Cartesian coordinates (x, y), RGB colors (red, green, blue), records in a database table (name, age, job), and many other sequences of values.

  6. Python Tuple: How to Create, Use, and Convert

    Python lists are mutable, while tuples are not. If you need to, you can convert a tuple to a list with one of the following methods. The cleanest and most readable way is to use the list() constructor:
        >>> t = 1, 2, 3
        >>> list(t)
        [1, 2, 3]
    A more concise but less readable method is to use unpacking.

  7. 10.28. Tuple Assignment

    10.28. Tuple Assignment ¶. Python has a very powerful tuple assignment feature that allows a tuple of variables on the left of an assignment to be assigned values from a tuple on the right of the assignment. This does the equivalent of seven assignment statements, all on one easy line. One requirement is that the number of variables on the ...

  8. 10.3: Tuple Assignment

    10.3: Tuple Assignment. One of the unique syntactic features of the Python language is the ability to have a tuple on the left side of an assignment statement. This allows you to assign more than one variable at a time when the left side is a sequence. In this example we have a two-element list (which is a sequence) and assign the first and ...

  9. Guide to Tuples in Python

        # Unpack a tuple into variables
        my_tuple = (1, 2, 3)
        a, b, c = my_tuple
        print(a)  # Output: 1
        print(b)  # Output: 2
        print(c)  # Output: 3
    Tuple Methods. In addition to the basic operations that you can perform on tuples, there are also several built-in methods that are available for working with tuples in Python.

  10. 13.3. Tuple Assignment with Unpacking

    13.3. Tuple Assignment with Unpacking ¶. Python has a very powerful tuple assignment feature that allows a tuple of variable names on the left of an assignment statement to be assigned values from a tuple on the right of the assignment. Another way to think of this is that the tuple of values is unpacked into the variable names.

  11. Tuple Assignment Python [With Examples]

    Here are some examples of tuple assignment in Python. Example 1: Basic Tuple Assignment.
        # Creating a tuple
        coordinates = (3, 4)
        # Unpacking the tuple into two variables
        x, y = coordinates
        # Now, x is 3, and y is 4
    Example 2: Multiple Variables Assigned at Once.
        # Creating a tuple.

  12. Unpacking in Python: Beyond Parallel Assignment

    Introduction. Unpacking in Python refers to an operation that consists of assigning an iterable of values to a tuple (or list) of variables in a single assignment statement. As a complement, the term packing can be used when we collect several values in a single variable using the iterable unpacking operator, * . Historically, Python developers have generically referred to this kind of ...

  13. Python Unpacking Tuples By Examples

    Unpacking a tuple means splitting the tuple's elements into individual variables. For example: x, y = (1, 2) . The left side, x, y , is a tuple of two variables x and y. The right side is also a tuple of two integers 1 and 2. The expression assigns the tuple elements on the right side ...

  14. Python Tuple (With Examples)

    In this article, we'll learn about Python Tuples with the help of examples. ... (fruits) # Output: TypeError: 'tuple' object does not support item assignment. Delete Tuples: We cannot delete individual items of a tuple. However ...

  15. Python's Assignment Operator: Write Robust Assignments

    Here, variable represents a generic Python variable, while expression represents any Python object that you can provide as a concrete value—also known as a literal—or an expression that evaluates to a value. To execute an assignment statement like the above, Python runs the following steps: Evaluate the right-hand expression to produce a concrete value or object.

  16. Unpacking a Tuple in Python

    Packing and Unpacking a Tuple: In Python, there is a very powerful tuple assignment feature that assigns the right-hand side of values into the left-hand side. In another way, it is called unpacking of a tuple of values into a variable. In packing, we put values into a new tuple while in unpacking we extract those values into a single variable.

  17. tuples

    Python does not have a "comma operator" as in C. Instead, the comma indicates that a tuple should be constructed. The right-hand side of a, b = a + b, a is a tuple with the two items a + b and a. On the left-hand side of an assignment, the comma indicates that sequence unpacking should be performed according to the rules you quoted: a will ...

  18. python

    However, tuples in Python are immutable, so you cannot append variables to a tuple once it is created. – mipadi, answered Sep 4, 2009 ...

  19. Python Tuples

    Tuple. Tuples are used to store multiple items in a single variable. Tuple is one of 4 built-in data types in Python used to store collections of data, the other 3 are List, Set, and Dictionary, all with different qualities and usage.. A tuple is a collection which is ordered and unchangeable.. Tuples are written with round brackets.


  21. python

    Example 1 (Swapping). Tuple assignment can be very handy in order to swap the contents of variables. The following example shows how we can swap the contents of two elements in an array in a clear and concise way, without the need of temporary variables (see the swap lines in the sketch after this list):

  22. How to use Python tuples (tuple): tuple operations and worked examples

    What is a tuple? A Python tuple (tuple) is a data type representing an ordered, unmodifiable collection of data. It resembles a list (list), but once a tuple is created its elements cannot be added, removed, or changed.

  23. os

    This module provides a portable way of using operating system dependent functionality. If you just want to read or write a file see open(), if you want to manipulate paths, see the os.path module, and if you want to read all the lines in all the files on the command line see the fileinput module. For creating temporary files and directories see the tempfile module, and for high-level file and ...

  24. Assigning variables to multiple 2-tuples returned from a function in Python

    If you are only interested in (y1, y2) and would like to ignore the other elements of the tuple, a general convention is to use _ (as a throw-away variable name):
        _, (y1, y2), _ = function(x, y, z, t)
    Another option is to just store the value in a variable and then index it appropriately:
        value = function(x, y, z, t)

  25. Python tuple assignment and checking in conditional statements

    While we are perfectly capable of assigning a tuple to a variable without explicitly enclosing it in parentheses: >>> foo_bar_tuple = "foo","bar" ... we are not able to print or check in a conditional if statement the variable containing the tuple in the previous fashion (without explicitly typing the parentheses): >>> print foo_bar_tuple ...
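Pulling the excerpts above together, the following short sketch shows tuple packing, unpacking, the swap idiom from entry 21, and the throw-away-name convention from entry 24 (all variable names are illustrative):

    # Packing: the comma, not the parentheses, builds the tuple.
    point = 3, 4

    # Unpacking: one variable per element; the counts must match,
    # otherwise Python raises ValueError ("not enough values to unpack").
    x, y = point
    print(x, y)        # 3 4

    # Swapping two values without a temporary variable.
    a, b = 10, 20
    a, b = b, a
    print(a, b)        # 20 10

    # Ignoring elements with the conventional throw-away name _.
    _, second, _ = ("first", "second", "third")
    print(second)      # second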