Share your ideas and vote for future features
Suggest an idea
Submitted yesterday by v-hnishikawa

I would like to be able to grant write permissions on a directory-by-directory basis to individual users or groups in OneLake's "Manage OneLake Data Access." Currently, "Manage OneLake Data Access" only offers "Read" and "ReadAll" settings. Write permissions are granted automatically to the Workspace Admin, Member, and Contributor roles, but those roles grant write access to the entire lakehouse. Please add a way to grant write permissions per directory.
Submitted yesterday by v-taikawa

In the notification settings for scheduled refreshes, I would like to be able to configure channels other than email, for example notifications to an external tool such as a scheduler, so that failures can be caught by automated monitoring rather than manual checks.
Submitted yesterday by v-tkamiya

When exporting a report as a PDF in the Power BI service, I would like the export to keep the report's original font. If a font must be substituted, I would like it replaced with a font for my own locale rather than a font from a different country.
Submitted Monday by v-taikawa

When exporting a report visual to an Excel file with "Data in current layout", I do not want the applied-filters row to be appended as the bottom row.
Submitted yesterday by v-nandagm

Need an API for creating Power BI gateway SQL connections that use the service principal authentication method.
Submitted yesterday by frithjof_v

For low-code users, it would be awesome if there were a Data Pipeline activity that could run OPTIMIZE on a table in a Lakehouse (or on all tables in a Lakehouse).
For example, when using Copy Activity or Dataflow Gen2 to append data to a Lakehouse table, the tables need to be optimized (compacted) after x runs. But there is no automated, low-code way to do it, so users forget to optimize the tables in the destination Lakehouse.
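Until such an activity exists, the usual workaround (not low-code, which is the point of this idea) is a short Spark notebook scheduled after the pipeline. A minimal Spark SQL sketch, assuming a Lakehouse table named sales:

```sql
-- Spark SQL, run from a Fabric notebook attached to the Lakehouse.
-- Compacts the small files produced by repeated appends.
-- The table name 'sales' and the column 'customerId' are assumptions.
OPTIMIZE sales;

-- Optionally co-locate data on a frequently filtered column:
OPTIMIZE sales ZORDER BY (customerId);
```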
Submitted yesterday by BI-Nomad

There is currently no way to monitor load times for paginated reports, because paginated report load times are not included in the Usage Metrics. Idea: incorporate paginated report load times into Usage Metrics, or provide another way to monitor them.
Submitted Sunday by v-tkamiya

I would like a slicer style that pops up a calendar allowing selection of a single date, rather than a range of dates.
Submitted Friday by sajjadniazi

Problem: Currently, when a dataset is connected to a Lakehouse as a datasource in Power BI Fabric, it defaults to a cloud connection mapped to SSO. In embedded mode, reports built on these datasets fail due to a lack of identity, as they do not inherit authentication from the service principal. To resolve this, users must manually adjust the datasource settings via the Power BI service (https://learn.microsoft.com/en-us/fabric/fundamentals/direct-lake-fixed-identity).
Proposed Solution: Introduce a new REST API, or extend the current API, to allow programmatic setting (or re-setting) of the connection to the service principal, including updating datasources.
Benefits:
- Automates the process, reducing manual intervention
- Minimizes downtime for embedded reports
- Enhances developer experience and deployment efficiency
- Ensures consistency in authentication settings across environments
Impact: This feature would significantly improve the workflow for organizations embedding Power BI reports using service principals, ensuring seamless and automated datasource authentication.
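The proposed API might be invoked along these lines. To be clear: the route name and body schema below are invented for illustration and are not part of the current Power BI REST API.

```python
# Sketch of the PROPOSED call. The endpoint 'Default.BindToServicePrincipal'
# and the body shape are hypothetical; nothing here exists in the current API.

def build_bind_to_service_principal_request(workspace_id: str,
                                            dataset_id: str,
                                            service_principal_object_id: str):
    """Build URL and JSON body for a hypothetical 'bind datasource to
    service principal' endpoint."""
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
           f"/datasets/{dataset_id}/Default.BindToServicePrincipal")
    body = {
        "identity": {
            "type": "ServicePrincipal",
            "objectId": service_principal_object_id,
        }
    }
    return url, body

url, body = build_bind_to_service_principal_request("ws-guid", "ds-guid", "sp-obj-id")
# A deployment script would POST this with an app-only Entra ID token.
```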
Submitted 13 hours ago by koestlerd
There should be a new "Last Updated" timestamp column in Synapse Data Science when viewing your created notebooks. This would let users see when teammates last changed a notebook and identify the most up-to-date working version.
Submitted by Miguel_Myers on 10-07-2024 10:00 PM

The current card visual forces users to overlap elements or waste copious amounts of time creating custom visuals. The new card feature should give users the ability to create multiple cards in a single container and provide a greater level of customization.
Submitted by Miguel_Myers on 10-07-2024 10:00 PM

It would be beneficial to incorporate features from Pivot tables that allow for the expansion and collapse of columns and hierarchical column groups within tabular visuals. This would not only solve the current limitations of matrices but also provide report creators with the flexibility to hide and show rows and columns, saving these settings for future use, thus eliminating the need to scroll through irrelevant data.
Submitted yesterday by frithjof_v

For low-code users, it would be awesome if there were a Data Pipeline activity that could run VACUUM on a table in a Lakehouse (or on all tables in a Lakehouse).
For example, when using Copy Activity or Dataflow Gen2 to write to a Lakehouse table, the tables need to be vacuumed. But there is no automated, low-code way to do it, so users forget to vacuum the destination Lakehouse.
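As with the related OPTIMIZE idea, the current workaround is a scheduled Spark notebook. A minimal Spark SQL sketch, assuming a Lakehouse table named sales:

```sql
-- Spark SQL, run from a Fabric notebook attached to the Lakehouse.
-- Removes data files no longer referenced by the Delta log.
-- The table name 'sales' is an assumption. The default retention
-- threshold (7 days) protects time travel; shortening it is risky.
VACUUM sales;
```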
Submitted Monday by jovanpop-msft

Fabric Warehouse currently supports the OPENROWSET function, which allows you to read CSV files directly. However, reading JSON files requires a workaround: treating the JSON files as text/CSV files and then using OPENJSON and the JSON functions to parse the documents. While this method works, it is a hacky workaround and not very elegant. To improve this, Fabric Warehouse should add native JSON support to the OPENROWSET function. This would eliminate the need for tricks with OPENJSON and the JSON functions, making reading JSON files more straightforward and elegant. The current workaround for reading JSON is:

SELECT
    TRY_CAST(JSON_VALUE(jsonContent, '$.orderId') AS INT) AS orderId,
    JSON_VALUE(jsonContent, '$.orderDate') AS orderDate,
    JSON_QUERY(jsonContent, '$.orderDetails') AS orderDetailsObject,
    JSON_QUERY(jsonContent, '$.deliveryAddress') AS deliveryAddressArray
FROM
    OPENROWSET(
        BULK 'https://<storage>.dfs.core.windows.net/datalakehouseuk/raw/json/orders.json',
        FORMAT = 'CSV',
        FIELDTERMINATOR = '0x0b',
        FIELDQUOTE = '0x0b'
    )
    WITH (
        jsonContent VARCHAR(500)
    ) AS r;
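Native support might look something like the following. This is hypothetical syntax sketched by analogy with the CSV path, not an existing feature:

```sql
-- Hypothetical: FORMAT = 'JSON' does not exist in Fabric Warehouse today.
SELECT orderId, orderDate, orderDetailsObject
FROM OPENROWSET(
    BULK 'https://<storage>.dfs.core.windows.net/datalakehouseuk/raw/json/orders.json',
    FORMAT = 'JSON'
) WITH (
    orderId INT '$.orderId',
    orderDate DATE '$.orderDate',
    orderDetailsObject VARCHAR(MAX) '$.orderDetails'
) AS r;
```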
Submitted by Miguel_Myers on 10-07-2024 10:00 PM

Enabling customized calculations at the query level for subtotals and grand totals would offer greater flexibility in reporting and preserve performance. Efficient organization of control settings to modify the style of these totals separately will empower report creators to achieve their desired appearance, while addressing their need for more control and customization in reporting.
Submitted by Miguel_Myers on 10-07-2024 10:00 PM

Imagine a world where report creators can automatically apply slicer and filter selections based on specific logic, revolutionizing data analysis and user experience. This innovative approach eliminates any need for complex workarounds, optimizes slicer functionality, and paves the way for more efficient and effective data reporting.
Submitted Monday by timothyeharris

I have three workspaces: DEV, UAT, and Prod. Within these workspaces, I have several independent projects, each with its own deployment schedule and processes. The current pipeline process makes it nearly impossible to deploy these projects independently of one another, because a human being has to rebuild the pipeline for every deployment. Not only that, but the difficulty of rebinding notebooks, semantic models, etc. as part of the deployment makes the whole pipeline process a non-starter. The deployment pipeline process would be much easier/better if I could:
1) specify which data pipelines, dataflows, notebooks, semantic models, reports, environments, and lakehouses need to move between workspaces
2) specify which data within lakehouses needs to be copied
3) rebind any notebooks, semantic models, and reports
In short, I want a pipeline package for each project that I can define within the source workspace(s).
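Such a per-project "pipeline package" might be declared roughly like this. Purely illustrative: no such schema, file, or item names exist today.

```json
{
  "package": "project-alpha",
  "items": [
    { "type": "notebook", "name": "Load_Sales" },
    { "type": "semanticModel", "name": "Sales_Model" },
    { "type": "report", "name": "Sales_Report" }
  ],
  "lakehouseData": [
    { "lakehouse": "SalesLakehouse", "tables": ["dim_date", "fact_sales"] }
  ],
  "rebind": [
    { "item": "Sales_Report", "to": "Sales_Model" }
  ]
}
```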
Submitted Friday by frithjof_v

Make it possible to write SparkSQL without having a Default Lakehouse.
With 3- or 4-part naming, i.e.
[workspace].[lakehouse].[schema].[table]
there should be no need to attach a Lakehouse in order to use SparkSQL.
Needing to attach a Lakehouse is annoying and adds extra complexity.
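With the requested naming, a query could fully qualify its tables and run without any attached Lakehouse. A sketch, with hypothetical workspace, lakehouse, and table names:

```sql
-- 4-part naming (hypothetical names); the identifiers resolve the
-- Lakehouse explicitly, so no default Lakehouse would need to be attached.
SELECT o.orderId, o.orderDate
FROM SalesWorkspace.SalesLakehouse.dbo.Orders AS o
WHERE o.orderDate >= '2024-01-01';
```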
Submitted by Miguel_Myers on 10-07-2024 10:00 PM

Interpreting visuals without a clear legend indicating the logic behind specific styles can lead to confusion and decision-making errors. Ensuring that legends and tooltips accurately display colors, patterns, and other visual components driven by conditional logic would enable report consumers to easily understand the applied logic and make more effective decisions.
Submitted Friday by v-velagalasr1

The "Copy visual as image" feature is available for reports in the Power BI service, but it is not available for embedded reports. Please implement "Copy visual as image" for embedded reports.
Idea Statuses
- New 14,916
- Need Clarification 5
- Needs Votes 22,631
- Under Review 638
- Planned 267
- Completed 1,649
- Declined 221
Latest Comments
- Jonathan_Garris on: Capacity level calendar of scheduled jobs
- miguel on: Need "deployment packages" for deployment pipeline...
- Jonathan_Garris on: Fabric Built-in Roles (RBAC)
- timothyeharris on: Improve Workspace Visibility in Microsoft Fabric
- galaeci on: Fabric Desktop: A Local Development Experience for...
- BHouston1 on: Use SparkSQL without Default Lakehouse
- Chris_Novak1 on: Email notification on Interactive Delay
- cduden on: REST API Support for Setting Lakehouse Datasource ...
- miguel on: Kusto Backfill for Incremental Refresh
- timothyeharris on: Overview of all Item permissions in a Workspace
Labels
- Power BI: 38,676
- Fabric platform: 529
- Data Factory: 445
- Data Factory | Data Pipeline: 284
- Data Engineering: 255
- Data Warehouse: 182
- Data Factory | Dataflow: 150
- Real-Time Intelligence: 127
- Fabric platform | OneLake: 112
- Fabric platform | Workspaces: 112
- Fabric platform | Admin: 106
- Fabric platform | CICD: 87
- Fabric platform | Capacities: 63
- Real-Time Intelligence | Eventhouse and KQL: 61
- Real-Time Intelligence | Activator: 52
- Fabric platform | Governance: 47
- Fabric platform | Security: 44
- Data Science: 43
- Data Factory | Mirroring: 37
- Fabric platform | Support: 30
- Real-Time Intelligence | Eventstream: 30
- Databases | SQL Database: 29
- Fabric platform | Data hub: 28
- Databases: 21
- Data Factory | Apache Airflow Job: 3
- Product: 2
- Fabric platform | Real-Time hub: 1
- Real-Time Hub: 1