Dataflows – Common issues, limitations & recommendations

With problems come solutions, and with new features come new bugs. Dataflows are still new to the Power Platform family, but some of us have already researched & played with them, and we ran into issues along the way. This blog covers some of the issues, limitations & recommendations for Dataflows seen as of writing this blog.

Invalid credentials. (Session ID: XXXX)

  • If you have entered the wrong URL, username, or password, you will get this Invalid credentials error.
  • Another reason could be selecting the wrong source, e.g. picking the SharePoint List source when you actually need the SharePoint Online List source (or vice versa). A minimal connection sketch follows below.
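For reference, here is a minimal Power Query M sketch of the SharePoint Online list connector. The site URL and list name are hypothetical placeholders; the point is that the Online connector (SharePoint.Tables) takes the site URL rather than the list URL, while the on-premises SharePoint List source is a different connector, and the credentials are supplied on the connection itself.

```
let
    // SharePoint *Online* list connector. The site URL below is a
    // hypothetical placeholder; use the site URL, not the list URL.
    Source = SharePoint.Tables(
        "https://contoso.sharepoint.com/sites/sales",
        [ApiVersion = 15]
    ),
    // Navigate to the list by its title ("Employees" is hypothetical).
    Employees = Source{[Title = "Employees"]}[Items]
in
    Employees
```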

Unexpected result type of schema evaluation. Expected: “Table”. Actual: “Challenge”.

  • There are many possible reasons for this, but in my case, I had multiple columns whose value showed as [Record]. So, I removed the unnecessary record columns, and the query loaded successfully for mapping (see the sketch after this list).
  • There is already a thread on the Power Platform community; read it here
  • If you face this issue & none of these approaches work, raise a support ticket. I have documented the steps here
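As a rough illustration of that fix, here is a small, self-contained M sketch. The table and column names are made up; the idea is simply that a record-typed column can either be removed or expanded into flat columns before mapping.

```
let
    // Hypothetical stand-in for a query whose Author/Manager columns
    // show as [Record] in the preview.
    Source = #table(
        {"Title", "Author", "Manager"},
        {
            {"Laptop", [Email = "a@contoso.com"], [Email = "m@contoso.com"]},
            {"Phone",  [Email = "b@contoso.com"], [Email = "m@contoso.com"]}
        }
    ),
    // Option 1: drop the record columns you don't need for mapping.
    RemovedRecords = Table.RemoveColumns(Source, {"Author"}),
    // Option 2: expand a record column into plain columns instead.
    Expanded = Table.ExpandRecordColumn(
        RemovedRecords, "Manager", {"Email"}, {"Manager Email"}
    )
in
    Expanded
```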

Your settings are invalid.

  • If your source does not have a date-time column, you can’t set up incremental refresh, hence the error. Make sure the source has a date-time column that the dataflow can use to pick up updated data; a small sketch follows below.
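One related situation worth calling out (an assumption on my part, not from the error itself): the source does have a modified-on value, but it arrives as text, so it cannot be chosen for incremental refresh. A minimal, self-contained M sketch of casting it to datetime is below; the data and column names are hypothetical.

```
let
    // Hypothetical source where "Modified On" arrives as text.
    Source = #table(
        {"Name", "Modified On"},
        {
            {"Record A", "2021-05-01T10:15:00"},
            {"Record B", "2021-05-03T08:30:00"}
        }
    ),
    // Cast the column to datetime so it can be selected for
    // incremental refresh.
    Typed = Table.TransformColumnTypes(
        Source, {{"Modified On", type datetime}}
    )
in
    Typed
```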

Duplicate data is present in your destination table –

  • Did you run the same dataflow again, but instead of refreshing the existing rows you ended up with the same data twice? Check your key column. In my case I had selected GUID as the key, hence the error: each time I refreshed, a new GUID was created, so rows were inserted again instead of being updated. In other words, it was simply telling me ‘Don’t mess with me!’ Pick a stable business key instead; a quick duplicate check is sketched below.
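Before trusting a column as the key, it can help to verify in Power Query that it is actually unique. Here is a small, self-contained M sketch (hypothetical data and column names) that lists any key values appearing more than once.

```
let
    // Hypothetical data with a candidate business key "Employee Id".
    Source = #table(
        {"Employee Id", "Name"},
        {
            {"E001", "Asha"},
            {"E002", "Ravi"},
            {"E002", "Ravi K"}   // duplicate key value
        }
    ),
    // Count rows per key value...
    Grouped = Table.Group(
        Source, {"Employee Id"},
        {{"Rows", each Table.RowCount(_), Int64.Type}}
    ),
    // ...and keep only the keys that appear more than once.
    Duplicates = Table.SelectRows(Grouped, each [Rows] > 1)
in
    Duplicates
```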

The remote server returned an error: (500) Internal Server Error. (The query cannot be completed because the number of lookup columns it contains exceeds the lookup column threshold enforced by the administrator.)

  • There can only be twelve (12) lookup columns in the table that you are migrating. If there are more, you will run into this error. You must re-prioritize the columns that you want to migrate; a trimming sketch follows below.
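One way to re-prioritize is to select only the columns you actually migrate in the query itself, so the number of lookup columns stays within the threshold. A minimal M sketch is below; the site URL, list name and column names are hypothetical.

```
let
    // Hypothetical SharePoint Online list with many lookup columns.
    Source = SharePoint.Tables(
        "https://contoso.sharepoint.com/sites/sales",
        [ApiVersion = 15]
    ),
    Projects = Source{[Title = "Projects"]}[Items],
    // Keep only the columns you actually need to migrate so the query
    // stays within the lookup column threshold.
    Trimmed = Table.SelectColumns(
        Projects,
        {"Title", "Owner", "Status", "Due Date"}
    )
in
    Trimmed
```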

Destination column type –

  • Currently, if you want to load data into a new table created by the dataflow, the destination column type is restricted to Text or Multiline Text. Hence, create your table with the required column types first & then map to it via the dataflow.

Destination –

  • If you are using a dataflow, the destination will be a Dataverse table (system table or custom table); you cannot change the destination. If you need a different destination, use another ETL tool or Azure Data Factory.

Recommendations –

  • Avoid refreshing data every 15 minutes. Instead, keep a gap of a couple of hours up to 24 hrs. between refreshes. [This is purely experience & a dataflow can refresh data as frequently as every 1 minute]
  • Choose incremental refresh where you have data dependencies on multiple sources or calculated columns. Read more here

  • Understand the difference between a delta load of data & a full update of data in a table. The latter will drop the existing data & bring in fresh data. The former will only bring in data from after a particular interval of time (as defined in the dataflow), keeping the existing data; it works more like an append.
  • If you have huge data volumes in your table, avoid using a dataflow; you may run into API limits. Read more here
  • Data migration in a dataflow is one way, that is, from the source to your Dataverse table. So, use a dataflow for one-off data migration with little or less complex transformation

Note – This blog will keep evolving with time. Also, the features are still improving, and the product group listens to your feedback.

Read about Dataflows in my earlier blog here

Keep reading, keep enjoying, keep exploring!
