Table Sizes Optimized in Query
Hi Experts,
When I run the query I get no data, but when I checked the data in the cube, the data is available there. I went to RSRT and checked the technical information, and there <b>Table Sizes Optimized is showing RED</b>. What is this error, and how do I solve this problem?
Thanks in Advance
Ravi.
Hi Ravi,
What was the fix for this issue ?
Appreciate your help.
Thanks
Similar Messages
-
Table size effect on query performance
I know this sounds like a very generic question, but, how much does table size affect the performance of a query?
This is a rather unusual case actually. I am running a query on two tables, say, Table1 and Table2. Table1 has roughly 1 million records, whereas for Table2 I tried using different numbers of records.
The resultant query returns 150,000 records. If I keep Table2 to 500 records, the query execution time takes 2 minutes. But, if I increase Table2 to 8,000 records, it would take close to 20 minutes!
I have checked the "Explain plan" statement and note that the indexes for the columns used for joining the two tables are being used.
Is it normal for table size to have such a big effect on performance time, even when number of records is under the 10,000 range?
Really appreciate your inputs. Thanks in advance.
Did you update your statistics when you changed the size of Table2? The CBO will probably choose different plans as the size of Table2 changes. If it thinks there are many more or fewer rows, you're likely to have performance issues.
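Acting on Justin's suggestion, refreshing the statistics for Table2 could be sketched like this (the schema and table names are placeholders, not from the original post):

```sql
-- Refresh optimizer statistics after Table2's row count changes,
-- so the CBO can re-evaluate its join plan.
-- MYSCHEMA / TABLE2 are placeholder names.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'MYSCHEMA',
    tabname          => 'TABLE2',
    cascade          => TRUE,                       -- also gather index stats
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE -- let Oracle pick the sample
  );
END;
/
```

After gathering, re-run the EXPLAIN PLAN to see whether the optimizer switches join methods for the larger Table2.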
Justin -
DB Query to get table sizes and classification in OIM Schema
My customer's OIM production DB size has gone up to 300 GB. They want to know why, i.e. "what kind of data" is the reason for such a large DB size. Is there a way to find out, from the OIM schema, the sizes of each table (like ACT, USR, etc.) and classify them into User Data, Config Data, Audit Data, Recon Data, etc.?
Any help is very much appreciated in this regard.
Regards
Vinay
You can categorize tables using information from the link below:
http://rajnishbhatia19.blogspot.in/2008/08/oim-tables-descriptions-9011.html
You can count the number of rows in a table using:
select count(*) from tablename;
Find the major tables whose size is to be calculated, and work out the average row length (by adding up the defined attribute lengths).
Finally, calculate the table size using the query below:
select TABLE_NAME, ROUND((AVG_ROW_LEN * NUM_ROWS / 1024), 2) SIZE_KB from USER_TABLES order by TABLE_NAME;
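The classification step GP describes can be sketched by bucketing table names into categories; the CASE mapping below is illustrative only (adjust it against the table list in the linked blog), and the sizes come from optimizer statistics, so they require the stats to be up to date:

```sql
-- Rough size breakdown by data category, bucketed on table-name patterns.
-- The prefix-to-category mapping is an assumption to be adjusted per the
-- OIM table descriptions; sizes rely on current optimizer statistics.
SELECT category,
       ROUND(SUM(size_kb) / 1024, 2) AS size_mb
FROM (
  SELECT table_name,
         avg_row_len * num_rows / 1024 AS size_kb,
         CASE
           WHEN table_name LIKE 'UPA%' OR table_name LIKE 'AUD%'   THEN 'Audit Data'
           WHEN table_name LIKE 'RCE%' OR table_name LIKE 'RECON%' THEN 'Recon Data'
           WHEN table_name IN ('USR', 'ACT', 'UGP', 'USG')         THEN 'User Data'
           ELSE 'Other/Config'
         END AS category
  FROM user_tables
)
GROUP BY category
ORDER BY size_mb DESC;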
regards,
GP -
Give me the SQL query which calculates the table size in Oracle 10g ECC 6.0
Hi expert,
Please give me the SQL query which calculates the table size in Oracle 10g ECC 6.0.
Regards
Orkun Gedik wrote:
select segment_name, sum(bytes)/(1024*1024) from dba_segments where segment_name = '<TABLE_NAME>' group by segment_name;
Hi,
This delivers possibly wrong data in MCOD installations.
Depending on the Oracle version and patch level, dba_segments does not always have correct data at all times, especially for indexes right after a parallel rebuild (even in DB02, because it is using USER_SEGMENTS).
It takes a day for the data to get back in line (I never found out who does the correction at night; could it be RSCOLL00?).
Use the above statement with "OWNER = " in the WHERE clause for MCOD, or connect as the schema owner and use USER_SEGMENTS.
Use with
segment_name LIKE '<TABLE_NAME>%'
if you like to see the related indexes as well.
For partitioned objects, a join from dba_tables / dba_indexes to dba_tab_partitions / dba_ind_partitions to dba_segments might be needed, especially for hash-partitioned tables, depending on how they were created (partition names SYS_xxxx).
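Putting those suggestions together, an owner-qualified version that also picks up same-prefixed index segments might look like this (the owner and table names are placeholders; this matches related indexes only when they share the table's name prefix):

```sql
-- Size of a table plus same-prefixed segments (e.g. SAP-style indexes TAB~0),
-- qualified by owner so it is safe in MCOD installations.
-- SAPSR3 / MYTABLE are placeholder names.
SELECT owner,
       segment_name,
       segment_type,
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS size_mb
FROM   dba_segments
WHERE  owner = 'SAPSR3'              -- schema owner (adjust for your MCOD schema)
AND    segment_name LIKE 'MYTABLE%'  -- table name prefix, catches related indexes
GROUP  BY owner, segment_name, segment_type
ORDER  BY size_mb DESC;
```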
Volker -
Query to find indexes bigger in size than tables sizes
Team -
I am looking for a query to find the list of indexes, in a schema or in an entire database, which are bigger in size than their respective tables.
Db version : Any
Thanks
Venkat
Results are the same in my case:
1 select di.owner, di.index_name, di.table_name
2 from dba_indexes di, dba_segments ds
3 where ds.blocks > (select dt.blocks
4 from dba_tables dt
5 where di.owner = dt.owner
6 and di.leaf_blocks > dt.blocks
7 and di.table_name = dt.table_name)
8* and ds.segment_name = di.index_name
SQL> /
OWNER INDEX_NAME TABLE_NAME
SYS I_CON1 CON$
SYS I_OBJAUTH1 OBJAUTH$
SYS I_OBJAUTH2 OBJAUTH$
SYS I_PROCEDUREINFO1 PROCEDUREINFO$
SYS I_DEPENDENCY1 DEPENDENCY$
SYS I_ACCESS1 ACCESS$
SYS I_OID1 OID$
SYS I_PROCEDUREC$ PROCEDUREC$
SYS I_PROCEDUREPLSQL$ PROCEDUREPLSQL$
SYS I_WARNING_SETTINGS WARNING_SETTINGS$
SYS I_WRI$_OPTSTAT_TAB_OBJ#_ST WRI$_OPTSTAT_TAB_HISTORY
SYS I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST WRI$_OPTSTAT_HISTGRM_HISTORY
SYS WRH$_PGASTAT_PK WRH$_PGASTAT
SYSMAN MGMT_STRING_METRIC_HISTORY_PK MGMT_STRING_METRIC_HISTORY
DBADMIN TSTNDX TSTTBL
15 rows selected -
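The same comparison can be written more directly by joining each index and its table to dba_segments once; a sketch, valid for non-partitioned objects only (partitioned objects would need dba_*_partitions joins as discussed in the earlier thread):

```sql
-- Indexes whose segment is larger than their table's segment
-- (non-partitioned case; sizes taken from dba_segments).
SELECT i.owner,
       i.index_name,
       i.table_name,
       ROUND(si.bytes / 1024 / 1024, 2) AS index_mb,
       ROUND(st.bytes / 1024 / 1024, 2) AS table_mb
FROM   dba_indexes  i
JOIN   dba_segments si ON si.owner = i.owner
                      AND si.segment_name = i.index_name
JOIN   dba_segments st ON st.owner = i.table_owner
                      AND st.segment_name = i.table_name
WHERE  si.bytes > st.bytes
ORDER  BY si.bytes DESC;
```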
How to Increase the Rows and Columns Size of Bex Query in Enterprise Portal of SAP 7.3
Dear All,
Please let me know how to increase the row and column size of a BEx query in Enterprise Portal of SAP 7.3.
Currently I am getting only 4 columns and 10 rows on one page, and I am getting tabs 1, 2, etc. for both rows and columns. So I want to increase the columns shown to more than 100 and the rows to more than 10,000.
Please suggest a suitable solution to overcome this issue.
Please find the Below screen shot.
Thanks
Regards,
Sai
Dear All,
Please find the attached screen shot.
The report be open with 4 or 5 columns and 5 or 6 rows.
So, please let me know how to increase the size of the table.
Please do the needful for me to overcome this issue.
Thanks
Regards,
Sai. -
We have 2 DBs called UT & ST, with the same setup and the same data,
running on HP-UX Itanium 11.23 with the same binary, 9.2.0.6.
One schema, called ARB, contains only materialized views in both DBs, and the same-named DB link connects to the same remote server in both DBs.
In that schema, one table called RATE has a table size of 323 MB in the UT DB, while in the ST DB the same RATE table has a table size of 480 MB; I found the difference by querying the bytes in dba_segments for the table. The queries follow:
In UT db
select sum(bytes)/1024/1024 from dba_segments where segment_name='RATE'
output
323
In ST db
select sum(bytes)/1024/1024 from dba_segments where segment_name='RATE'
output
480mb
It's quite strange: both tables have the same DDL, the same record counts, the same initial and next extents; all storage parameters are the same, and both DBs use the same 160K uniform-size tablespace.
DDL of the table in the UT environment:
SQL> select dbms_metadata.get_ddl('TABLE','RATE','ARB') from dual;
CREATE TABLE "ARB"."RATE"
( "SEQNUM" NUMBER(10,0) NOT NULL ENABLE,---------- ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 163840 NEXT 163840 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "AB_DATA"
DDL of the table in the ST environment:
CREATE TABLE "ARB"."RATE"
( "SEQNUM" NUMBER(10,0) NOT NULL ENABLE,---------- ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 163840 NEXT 163840 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "AB_DATA"..
Tablespace of the ST DB:
SQL> select dbms_metadata.get_ddl('TABLESPACE','AB_DATA') from dual;
CREATE TABLESPACE "AB_DATA" DATAFILE
'/koala_u11/oradata/ORST31/ab_data01ORST31.dbf' SIZE 1598029824 REUSE
LOGGING ONLINE PERMANENT BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 163840 SEGMENT SPACE MANAGEMENT MANUAL
Tablespace of the UT DB:
SQL> select dbms_metadata.get_ddl('TABLESPACE','AB_DATA') from dual;
CREATE TABLESPACE "AB_DATA" DATAFILE
'/koala_u11/oradata/ORDV32/ab_data01ORDV32.dbf' SIZE 1048576000 REUSE
LOGGING ONLINE PERMANENT BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 163840 SEGMENT SPACE MANAGEMENT MANUAL
Why is the table size different?
If everything is the same as you stated, I would guess the bigger table might have some free blocks. If you truncate the bigger one and insert /*+ append */ into bigger (select * from smaller), then check the size of the bigger table and see what you find. By the way, dba_segments and dba_extents only give usage at extent-level granularity; within an extent there are blocks that might not be fully occupied. In order to get the exact bytes of the space, you'll need to use the dbms_space package.
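The dbms_space check mentioned here can be sketched as follows (the ARB/RATE names are the ones from the question, used as placeholders; run with SERVEROUTPUT on):

```sql
-- Report total vs. unused space for a segment via DBMS_SPACE.UNUSED_SPACE.
-- "Unused" here means blocks above the high-water mark; free space inside
-- used blocks would need DBMS_SPACE.FREE_BLOCKS / SPACE_USAGE instead.
SET SERVEROUTPUT ON
DECLARE
  l_total_blocks   NUMBER;
  l_total_bytes    NUMBER;
  l_unused_blocks  NUMBER;
  l_unused_bytes   NUMBER;
  l_last_file_id   NUMBER;
  l_last_block_id  NUMBER;
  l_last_block     NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => 'ARB',    -- placeholder owner
    segment_name              => 'RATE',   -- placeholder table
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_last_file_id,
    last_used_extent_block_id => l_last_block_id,
    last_used_block           => l_last_block);
  DBMS_OUTPUT.PUT_LINE('Total MB : ' || ROUND(l_total_bytes  / 1024 / 1024, 2));
  DBMS_OUTPUT.PUT_LINE('Unused MB: ' || ROUND(l_unused_bytes / 1024 / 1024, 2));
END;
/
```

Running this against RATE in both DBs would show whether the extra 157 MB in ST sits above the high-water mark.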
You may get some idea from the extreme example I created below:
SQL>create table big (c char(2000));
Table created.
SQL>select sum(bytes)/1024 kb from user_segments
SQL>where segment_name='BIG';
KB
128 -- my tablespace is LMT uniform sized 128KB
1 row selected.
SQL>begin
SQL> for i in 1..100 loop
SQL> insert into big values ('A');
SQL> end loop;
SQL>end;
SQL>/
PL/SQL procedure successfully completed.
SQL>select sum(bytes)/1024 kb from user_segments
SQL>where segment_name='BIG';
KB
256 -- 2 extents after loading 100 records, 2KB+ each record
1 row selected.
SQL>commit;
Commit complete.
SQL>update big set c='B' where rownum=1;
1 row updated.
SQL>delete big where c='A';
99 rows deleted. -- remove 99 records at the end of extents
SQL>commit;
Commit complete.
SQL>select sum(bytes)/1024 kb from user_segments
SQL>where segment_name='BIG';
KB
256 -- same 2 extents 256KB since the HWM is not changed after DELETE
1 row selected.
SQL>select count(*) from big;
COUNT(*)
1 -- however, only 1 record occupies 256KB space(lots of free blocks)
1 row selected.
SQL>insert /*+ append */ into big (select 'A' from dba_objects where rownum<=99);
99 rows created. -- insert 99 records ABOVE HWM by using /*+ append */ hint
SQL>commit;
Commit complete.
SQL>select count(*) from big;
COUNT(*)
100
1 row selected.
S6UJAZ@dor_f501>select sum(bytes)/1024 kb from user_segments
S6UJAZ@dor_f501>where segment_name='BIG';
KB
512 -- same 100 records, same uniformed extent size, same tablespace LMT, same table
-- now takes 512 KB space(twice as much as what it took originally)
1 row selected. -
MySQL lock table size Exception
Hi,
Our users get random error pages from vibe/tomcat (Error 500).
If the user tries it again, it works without an error.
here are some errors from catalina.out:
Code:
2013-07-31 06:23:12,225 WARN [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:23:12,225 ERROR [http-8080-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:23:12,242 WARN [http-8080-8] [org.kablink.teaming.web.portlet.handler.LogContextInfoInterceptor] - Action request URL [http://vibe.*******.ch/ssf/a/do?p_name=ss_forum&p_action=1&entryType=4028828f3f0ed66d013f0f3ff208013d&binderId=2333&action=add_folder_entry&vibeonprem_url=1] for user [kablink,ro]
2013-07-31 06:23:12,245 WARN [http-8080-8] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
org.springframework.dao.InvalidDataAccessApiUsageException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry; nested exception is org.hibernate.TransientObjectException: object references an unsaved transient instance - save the transient instance before flushing: org.kablink.teaming.domain.FolderEntry
at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:654)
at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:411)
2013-07-31 06:23:36,474 ERROR [Sitescape_QuartzSchedulerThread] [org.quartz.core.ErrorLogger] - An error occured while scanning for the next trigger to fire.
org.quartz.JobPersistenceException: Couldn't acquire next trigger: The total number of locks exceeds the lock table size [See nested exception: java.sql.SQLException: The total number of locks exceeds the lock table size]
at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2794)
at org.quartz.impl.jdbcjobstore.JobStoreSupport$36.execute(JobStoreSupport.java:2737)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.executeInNonManagedTXLock(JobStoreSupport.java:3768)
at org.quartz.impl.jdbcjobstore.JobStoreSupport.acquireNextTrigger(JobStoreSupport.java:2733)
at org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:264)
Caused by: java.sql.SQLException: The total number of locks exceeds the lock table size
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2870)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1573)
at com.mysql.jdbc.ServerPreparedStatement.serverExecute(ServerPreparedStatement.java:1169)
2013-07-31 06:27:12,463 WARN [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:27:12,463 ERROR [Sitescape_Worker-8] [org.jbpm.graph.def.GraphElement] - action threw exception: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
org.springframework.jdbc.UncategorizedSQLException: Hibernate operation: could not execute update query; uncategorized SQLException for SQL [update SS_ChangeLogs set owningBinderKey=?, owningBinderId=? where (entityId in (? , ?)) and entityType=?]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
at org.springframework.orm.hibernate3.HibernateAccessor.convertJdbcAccessException(HibernateAccessor.java:424)
2013-07-31 06:27:22,393 INFO [CT-kablink] [org.kablink.teaming.lucene.LuceneProvider] - (kablink) Committed, firstOpTimeSinceLastCommit=1375251142310, numberOfOpsSinceLastCommit=12. It took 82.62174 milliseconds
2013-07-31 06:28:22,686 INFO [Sitescape_Worker-9] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252102500
2013-07-31 06:29:51,309 INFO [Sitescape_Worker-10] [org.kablink.teaming.jobs.CleanupJobListener] - Removing job send-email.sendMail-1375252191099
2013-07-31 06:32:08,820 WARN [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:08,820 ERROR [http-8080-2] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:10,775 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:10,775 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:12,305 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:12,305 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:14,605 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:14,606 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:16,056 WARN [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:16,056 ERROR [http-8080-3] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:24,166 WARN [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - SQL Error: 1206, SQLState: HY000
2013-07-31 06:32:24,166 ERROR [http-8080-1] [org.hibernate.util.JDBCExceptionReporter] - The total number of locks exceeds the lock table size
2013-07-31 06:32:24,167 WARN [http-8080-1] [org.kablink.teaming.spring.web.portlet.DispatcherPortlet] - Handler execution resulted in exception - forwarding to resolved error view
org.springframework.jdbc.UncategorizedSQLException: Hibernate flushing: could not insert: [org.kablink.teaming.domain.AuditTrail]; uncategorized SQLException for SQL [insert into SS_AuditTrail (zoneId, startDate, startBy, endBy, endDate, entityType, entityId, owningBinderId, owningBinderKey, description, transactionType, fileId, applicationId, deletedFolderEntryFamily, type, id) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 'A', ?)]; SQL state [HY000]; error code [1206]; The total number of locks exceeds the lock table size; nested exception is java.sql.SQLException: The total number of locks exceeds the lock table size
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:83)
at org.springframework.orm.hibernate3.HibernateTransactionManager.convertJdbcAccessException(HibernateTransactionManager.java:805)
at org.springframework.orm.hibernate3.HibernateTransactionManager.convertHibernateAccessException(HibernateTransactionManager.java:791)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:664)
It always logs the Mysql error code 1206:
MySQL :: MySQL 5.4 Reference Manual :: 13.6.12.1 InnoDB Error Codes
1206 (ER_LOCK_TABLE_FULL)
The total number of locks exceeds the lock table size. To avoid this error, increase the value of innodb_buffer_pool_size.
The value of innodb_buffer_pool_size is set to 8388608 (8MB) on my server.
In the documentation (MySQL :: MySQL 5.4 Reference Manual :: 13.6.3 InnoDB Startup Options and System Variables) it says that the default is 128MB.
Can I set the value to 134217728 (128MB), or will this cause other problems? Will this setting solve my problem?
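For reference, the change would normally go in the MySQL server configuration file, e.g. (a sketch; the exact file path and value depend on your SLES setup, and on MySQL 5.0 this variable is not dynamic, so mysqld must be restarted):

```ini
# /etc/my.cnf -- restart mysqld after changing this
[mysqld]
# Raise the InnoDB buffer pool from the old 8M default; the lock table
# lives inside the buffer pool, so a larger pool allows more row locks.
innodb_buffer_pool_size = 128M
```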
Thanks for your help.
I already found an entry from Kablink:
https://kablink.org/ssf/a/c/p_name/s...beonprem_url/1
But i think this can't be a permanent solution...
Our MySQL Server version is 5.0.95 running on sles11 -
Newbie to EM - How to list table sizes in a tablespace and kill sessions
Hi,
I'm used to querying the data dictionary to find table sizes on disk and to identify sessions to kill.
How can I do these 2 things with Enterprise Manager?
Using Oracle 10g EM against Oracle 11.2.0.3 database.
Thanks
You should be able to find the first data/index block with the following, even on an empty table/index:
select header_file, header_block +1
from dba_segments
where segment_name = '<index or table name>'; -
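For the session-killing half of the question, the dictionary route looks like this (the SID and serial# in the KILL statement are placeholders to be taken from the first query's output):

```sql
-- Identify candidate sessions to kill.
SELECT sid, serial#, username, status, program
FROM   v$session
WHERE  username IS NOT NULL;

-- Then terminate one, using the sid and serial# found above
-- ('123,45678' is a placeholder pair).
ALTER SYSTEM KILL SESSION '123,45678';
```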
Identify the cubes where dimension table size is exactly 100% of the fact table
Hello,
We want to identify the cubes where the dimension table size is exactly 100% of the fact table size.
Is there any table or standard query which can give me this data?
Regards,
Shital
Use report (SE38) SAP_INFOCUBE_DESIGNS.
M. -
Hi,
I want to find the size of each table in one schema. Can anyone tell me how to find the size of the tables?
-GK
GK, being that you want the information for all tables, you can leave the SUM out of tomva's example, though you might want to consider including the associated index storage in your space usage report, either as separate objects or summed into their associated tables.
You can query dba_indexes to find all indexes for a table and then join to dba_segments or dba_extents to get the storage data.
You can find all the rdbms dictionary views documented in the Oracle version# Reference manual.
HTH -- Mark D Powell -- -
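Mark's suggestion of summing index storage into the owning table can be sketched as one query over the USER_* views (run as the schema owner; for another schema, switch to the DBA_* views with an OWNER filter):

```sql
-- Per-table size including associated index segments, for the current schema.
SELECT tab,
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS total_mb
FROM (
  -- the table's own segment
  SELECT segment_name AS tab, bytes
  FROM   user_segments
  WHERE  segment_type = 'TABLE'
  UNION ALL
  -- each index's segment, attributed back to its table
  SELECT i.table_name AS tab, s.bytes
  FROM   user_indexes  i
  JOIN   user_segments s
    ON   s.segment_name = i.index_name
   AND   s.segment_type = 'INDEX'
)
GROUP BY tab
ORDER BY total_mb DESC;
```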
Size of SQL query before execution and after execution
hi all
I need help on how I can find out the size of a SQL query before execution and after execution in Java.
The query can be any query select / insert / update
Can anyone tell me if any system tables help to find out the required size I mentioned.
Urgent help is required
Thanking in advance
I need the size in terms of bytes.
Like the requirement is stated below:
select ................: 10 B , return 250 B
so i need size before and after execution in terms of bytes -
"Convert Text to Table" Size limit issue?
Alphabetize a List
I’ve been using this well known work around for years.
Select your list and in the Menu bar click Format>Table>Convert Text to Table
Select one of the column’s cells (1st click selects entire table, 2nd click selects individual cell)
Open “Table Inspector” (Click Table icon at top of Pages document)
Make sure “table” button is selected, not “format” button
Choose Sort Ascending from the Edit Rows & Columns pop-up menu
Finally, click Format>Table>Convert Table to Text.
A few days ago I added items & my list was 999 items long, ~22 pages.
Tonight, I added 4 more items. Still the same # pages but now 1,003 items long.
Unable to Convert Text to Table! Tried for 45 minutes. I think there is a list length limit, perhaps 999 items?
I tried closing the document w/o any changes. Re-opening Pages & re-adding my new items to the end of the list as always & once again when I highlight the list & Format>Table>Convert Text to Table .....nothing happens! I could highlight part of the list, up to 999 items, & leave the 4 new items unhighlighted & it works. I pasted the list into a new doc and copied a few items from the middle of the list & added them to the end of my new 999 list to make it 1,003 items long (but different items) & it did NOT work. I even attempted to add a single new item, making the list an even 1,000 items long & nope, not working. Even restarted the iMac, no luck.
I can get it to work with 999 or fewer items easily as always but no way when I add even a single new item.
Anyone else have this problem? It s/b easy to test out. If you have a list of say, 100 items, just copy & repeatedly paste into a new document multiple times to get over 1,000 & see if you can select all & then convert it from text to table.
Thanks!
Pages 08 v 3.03
OS 10.6.8
Yes, Pages has a table size limit, as you have discovered. Numbers has a much greater capacity for table length, so if you do your sort in Numbers you won't have any practical limitation.
A better approach than switching to Numbers for the sort would be to download, install and activate Devon Wordservice. Then you could sort your list without converting it to a table.
Jerry -
Bex Query: Too many table names in the query The maximum allowable is 256
Hi Experts,
I need your help. I'm working on a query using a MultiProvider over 2 DataStores. I need to work with cells to assign specific account values to specific rows and columns, so I was creating a structure with elements from a hierarchy, but I get this error when I'm halfway through the structure:
"Too many table names in the query. The maximum allowable is 256.Incorrect syntax near ')'.Incorrect syntax near 'O1'."
Any idea what is happening? Is it possible to fix it? Do I need to ask for a modification of my InfoProviders? Someone told me it is possible to combine 2 queries; is it true?
Thanks a lot for your time and patience.
Hi,
The maximum allowable limit of 256 holds true. It is the maximum number of characteristics and key figures that can be used on the column side. While creating a structure, you create key figures (restricted or calculated), formulas, etc. The objects that you use to create these should not number more than 256.
http://help.sap.com/saphelp_nw70/helpdata/EN/4d/e2bebb41da1d42917100471b364efa/frameset.htm
Not sure if a combination of 2 queries is possible. You can use RRI, or have a workbook with 2 queries.
Hope it helps. -
Power Query; How do I reference a Power Pivot table from a Power Query query
Hi,
It's pretty awesome how you can define Extract, Transform and Load processes within Power Query without having to type a single line of code. However, how do I reference a Power Pivot table from a Power Query query, to avoid repeatedly accessing the same data source (CSV) file, with a view to increasing performance?
We are aware of the Reference sub-menu option within Power Query. However, the new query created by the Reference option still seems to refresh data from the data source (CSV) rather than just referencing the base query. Is this understanding correct? There does seem to be a lot of hard disk activity when re-running the new query, which is based on a base query rather than a data source. So we were hoping the new query would just reference the base query in memory rather than rescanning the hard disk. Is there any way to ensure that the reference query just rescans the base query in memory?
Kind Regards,
Kieran.
Kieran Patrick Wood http://www.innovativebusinessintelligence.com http://uk.linkedin.com/in/kieranpatrickwood http://kieranwood.wordpress.com/
Hi Kieran,
This sounds like something to suggest for a future release. At the present time, Power Query will always re-run the entire Power Query query when refreshed. The Reference feature is analogous to a SQL view, whereby the underlying query is always re-executed when it is queried, or in this case refreshed. Even something like using the Power Query cache to minimise the amount of data re-read from disk would be helpful for performance, but the cache is only used for the preview data and is stored locally.
It would be a good idea to suggest this feature to the Power BI team via the feedback smiley face.
Regards,
Michael Amadi
Please use the 'Mark as answer' link to mark a post that answers your question. If you find a reply helpful, please remember to vote it as helpful :)
Website: http://www.nimblelearn.com, Twitter:
@nimblelearn
Hi Michael,
Glad to hear from you about this. And thanks to Kieran for bringing a very good valid point to debate. Will be glad to see this in future release.
- please mark correct answers