Python for Automation in Azure DevOps
By Anatoly Mironov
I recently discussed how Azure DevOps Work Items—particularly the Completed field—could be better utilized to aggregate a team’s burnup.
Suppose each project includes a backlog item or user story for time reporting, with every team member assigned a new task each week.
That discussion spurred me to learn more about the REST APIs in Azure DevOps: how to execute the calls and how to automate them.
Hypothetical goal and Prompting
My hypothetical goal is to update the Completed field of "my" task (work item) and set it to 10 every Friday at noon, completely automated. Let's pretend I always work 10 hours a week in that project; any deviations are fine to handle manually. My programming language of choice in this case is Python, because I want more practice with it.
I prompted AI to get the bulk of the code, and it worked pretty well. If you are curious, I have documented my prompts in a gist.
Automation Options Considered
The key is automation. I started by creating the code in a Python notebook, and it worked well locally. So I considered my options for automation:
- Running a scheduled task locally on my laptop doesn't qualify as true automation, so I ruled that out.
- Power Automate. Theoretically I could achieve my goal, since there are connectors, but I can anticipate some problems:
  - Filter and extract operations can become "dirty" (hard to maintain) compared to the clean "pro dev" way in Python.
  - Azure DevOps actions are Power Automate Premium. While obtaining such a license is possible, I prefer not to depend on it for my automation setup.
- Azure Function, Azure Automation, Azure Container App. A solution deployed to Azure is a good option, especially a Container App, but in my case the solution is more "lightweight", more towards "personal productivity". Besides that, it would require a Service Principal (Managed Identity) and permissions, which are hard to get nowadays.
Azure Pipelines for the win
Then there's Azure Pipelines, which fits perfectly into my hypothetical scenario. It is my choice in this case; you can see the details of the solution below. The advantages are:
- It is in the same system as the work items, so no "integration" headache
- No extra Service Principals, no extra permissions
- It runs on existing infrastructure in Azure DevOps, on build agents
- A lot of opportunities for further customization of the triggers and actions
- The Python file executed is no different from what I can run locally or host somewhere else. The code can be further developed to include additional steps such as data analysis and even generating and publishing burndown or burnup reports and charts; a hypothetical sketch of such a step follows this list.
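As a taste of that last point, here is a hypothetical sketch, not part of the actual solution, of how the fetched child tasks could be rolled up into a burnup series per ISO week. The `work_items` list and the use of the ClosedDate field are my assumptions here:

```python
# Hypothetical follow-up step, not part of the actual solution: aggregate
# Completed Work per ISO week into a simple burnup series. `work_items` is
# assumed to be the list of child tasks already fetched from the REST API;
# the field names are the standard Microsoft.VSTS ones.
import datetime
from collections import defaultdict

def burnup_by_week(work_items):
    totals = defaultdict(float)
    for item in work_items:
        fields = item['fields']
        completed = fields.get('Microsoft.VSTS.Scheduling.CompletedWork') or 0
        closed = fields.get('Microsoft.VSTS.Common.ClosedDate')
        if closed:
            # ClosedDate arrives as an ISO 8601 string, e.g. "2025-05-16T12:00:00Z"
            week = datetime.date.fromisoformat(closed[:10]).isocalendar()[1]
            totals[f"v{week}"] += completed
    # Cumulative sum over the weeks (sorted numerically) gives the burnup line
    series, running = {}, 0.0
    for week in sorted(totals, key=lambda w: int(w[1:])):
        running += totals[week]
        series[week] = running
    return series
```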
Findings and thoughts
I used AI to generate the bulk of the code, although I had to make it work as a whole and come up with the idea of hosting it as an Azure Pipeline.
One concern I have is that Personal Access Tokens (PATs) remain the default recommendation in many official documents and online resources, for example in azure-devops-python-api. That is also what you get when you ask AI to generate code. There are better authentication methods; locally I use DefaultAzureCredential.
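For reference, a minimal sketch of PAT-free token acquisition. It assumes the GUID below is the well-known Azure DevOps resource (application) ID; verify it against the current documentation before relying on it:

```python
# A minimal sketch of PAT-free authentication with DefaultAzureCredential.
# Requires the azure-identity package. The GUID is assumed to be the
# well-known resource ID of Azure DevOps; ".default" requests its static scope.
import requests
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
access_token = credential.get_token("499b84ac-1321-427f-aa17-267ca6975798/.default")

headers = {'Authorization': f'Bearer {access_token.token}'}
response = requests.get(
    "https://dev.azure.com/tolle/_apis/projects?api-version=7.0",
    headers=headers
)
print(response.status_code)
```

Locally, DefaultAzureCredential picks up whatever is available (Azure CLI login, environment variables, and so on), so the same code runs unchanged in different environments.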
The API versioning in the Azure DevOps REST APIs is good for predictability, but it's not elegant.
There is a Python package for Azure DevOps: azure-devops-python-api. I think I would have chosen it if I had written the entire code myself, but because AI generated the "big" mass of code and it worked, the developer laziness that would normally push me toward a wrapper never kicked in. A positive aspect of not using an API wrapper (which the Python package is) is that there is less that can break. The downside is that my query calls take more lines of code, which at a bigger scale hurts maintainability.
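For comparison, here is a hedged sketch of what the same hierarchy query could look like with the wrapper. It follows the package's documented samples rather than code I have run, so treat the details as assumptions:

```python
# A sketch of the hierarchy query using the azure-devops-python-api wrapper,
# based on the samples in the package's README. The versioned module path
# (v7_1 here) varies with the installed package version, and PAT auth is
# shown because that is what the README demonstrates.
import os

from azure.devops.connection import Connection
from azure.devops.v7_1.work_item_tracking.models import Wiql
from msrest.authentication import BasicAuthentication

connection = Connection(
    base_url="https://dev.azure.com/tolle",
    creds=BasicAuthentication('', os.getenv("AZURE_DEVOPS_PAT"))
)
wit_client = connection.clients.get_work_item_tracking_client()

# The wrapper returns typed objects instead of raw JSON dictionaries
result = wit_client.query_by_wiql(Wiql(query=(
    "SELECT [System.Id] FROM WorkItemLinks "
    "WHERE ([Source].[System.Id] = 1456) "
    "AND ([System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Forward') "
    "MODE (Recursive)"
)))
ids = [rel.target.id for rel in result.work_item_relations if rel.target]
work_items = wit_client.get_work_items(ids=ids)
```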
In order to call the API, the access token needs to be passed as an environment variable to the Python environment where the script gets executed: SYSTEM_ACCESSTOKEN: $(System.AccessToken).
AI-generated solutions can sometimes be computationally expensive. Copilot and other code generators take your prompt and deliver a working solution, but not necessarily the most efficient one in terms of computation, cost and climate impact. In this case, the code gets the entire list of child work items and then filters them locally. Yes, it succeeds, but suppose there were a thousand child items: what about paging or throttling? As a developer, I still have the responsibility to cover those scenarios and make sure the solution does not use too much CPU power or bandwidth. It's just a thought; perhaps better, consecutive prompting could improve the solution, but at the very least, developers should remain aware of these trade-offs. I imagine that in future code reviews, we'll need to focus more on architectural qualities rather than just whether the code functions.
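To make the thought concrete, here is a sketch of the kind of defensive handling I mean. It assumes the documented 200-ID limit of the work items endpoint and the Retry-After header that Azure DevOps sends when throttling; verify both against the current REST API docs:

```python
import time

import requests

def get_work_items_batched(base_url, headers, ids, batch_size=200):
    """Fetch work item details in batches, respecting throttling.

    Assumes the work items endpoint accepts at most 200 IDs per call
    and that Azure DevOps answers 429 with a Retry-After header
    when throttling.
    """
    items = []
    for start in range(0, len(ids), batch_size):
        chunk = ids[start:start + batch_size]
        url = f"{base_url}/workitems?ids={','.join(map(str, chunk))}&api-version=7.0"
        while True:
            response = requests.get(url, headers=headers)
            if response.status_code == 429:
                # Back off for the server-suggested interval before retrying
                time.sleep(int(response.headers.get("Retry-After", 5)))
                continue
            response.raise_for_status()
            items.extend(response.json()['value'])
            break
    return items
```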
Final Solution with Azure Pipelines
I published these files as a gist on GitHub, but I am also embedding them at the end of this post for reference. The solution consists of three files.
azure-pipelines.yaml orchestrates the code execution.
```yaml
trigger: none

schedules:
- cron: "0 12 * * 5"  # Friday at noon (cron schedules run in UTC)
  displayName: Weekly Friday Report
  branches:
    include:
    - main
  always: true

pool:
  vmImage: ubuntu-latest

stages:
- stage: RunReport
  jobs:
  - job: RunReportJob
    steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: '3.x'
        addToPath: true
      displayName: 'Set up Python'
    - script: |
        pip install -r requirements.txt
      displayName: 'Install dependencies'
    - script: |
        python report-time.py
      displayName: 'Run report-time.py'
      env:
        SYSTEM_ACCESSTOKEN: $(System.AccessToken)
```
requirements.txt is the list of dependencies; in this case just one, but there may be more in the future.
```
requests
```
report-time.py is the “heart” of the solution.
```python
import datetime
import json
import os

import requests

token = os.getenv("SYSTEM_ACCESSTOKEN")
headers = {
    'Authorization': f'Bearer {token}',
    'Content-Type': 'application/json'
}
print(f'token {token[:10]}...')

# Azure DevOps configuration
organization = "tolle"   # Replace with your organization name
project = "project1"     # Replace with your project name
userstory_id = 1456      # User story ID to query
me = "anatoly"
hours_to_add = 10

# Query for work items that are children of the user story
base_url = f"https://dev.azure.com/{organization}/{project}/_apis/wit"
query_url = f"{base_url}/wiql?api-version=7.0"
wiql_query = {
    "query": (
        "SELECT [System.Id], [System.Title], [System.WorkItemType] "
        "FROM WorkItemLinks "
        f"WHERE ([Source].[System.Id] = {userstory_id}) "
        "AND ([System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Forward') "
        "MODE (Recursive)"
    )
}

# Execute the query
query_response = requests.post(query_url, headers=headers, json=wiql_query)
query_result = query_response.json()
print(json.dumps(query_result, indent=2))

# Extract work item IDs from the query result
work_item_ids = []
if 'workItemRelations' in query_result:
    for relation in query_result['workItemRelations']:
        if relation.get('target'):
            work_item_ids.append(relation['target']['id'])

print(f"Found {len(work_item_ids)} child work items")

anatoly_task = None
if work_item_ids:
    # Get detailed information for each work item
    ids_string = ",".join(map(str, work_item_ids))
    details_url = f"{base_url}/workitems?ids={ids_string}&api-version=7.0"
    details_response = requests.get(details_url, headers=headers)
    work_items = details_response.json()['value']

    # Current week number (ISO week, as used in Sweden), e.g. "v42"
    current_week = datetime.date.today().isocalendar()[1]
    week_format = f"v{current_week}"

    # Find this week's task with my name in its title
    for item in work_items:
        title = item['fields'].get('System.Title', '').lower()
        if me in title and week_format in title \
                and item['fields'].get('System.WorkItemType') == 'Task':
            anatoly_task = item
            break
else:
    print(f"No child work items found for user story {userstory_id}")

if anatoly_task:
    print(json.dumps(anatoly_task, indent=2))
    task_id = anatoly_task['id']
    print(f"Found Anatoly's task: {anatoly_task['fields']['System.Title']} (ID: {task_id})")

    # Patch operations require the JSON Patch content type
    patch_headers = headers.copy()
    patch_headers['Content-Type'] = 'application/json-patch+json'

    # Update the "Completed Work" field with 10 hours
    update_url = f"{base_url}/workitems/{task_id}?api-version=7.0"
    update_data = [
        {
            "op": "add",
            "path": "/fields/Microsoft.VSTS.Scheduling.CompletedWork",
            "value": hours_to_add
        }
    ]
    update_response = requests.patch(update_url, headers=patch_headers, json=update_data)

    if update_response.status_code == 200:
        print(f"Successfully updated Completed Work to {hours_to_add} hours")
        updated_task = update_response.json()
        print(f"Updated task: {updated_task['fields']['System.Title']}")
        print(f"Completed Work: {updated_task['fields'].get('Microsoft.VSTS.Scheduling.CompletedWork', 'Not set')}")
    else:
        print(f"Failed to update task. Status code: {update_response.status_code}")
        print(f"Error: {update_response.text}")
else:
    print(f"No task with '{me}' in the name found among the children of user story {userstory_id}")
```