BW Statistics for Infocubes
Hello friends,
I tried to switch on the statistics for a new InfoCube that was recently transported to production,
but I get the message:
InfoCube ZSD_SALES blocked through Change and Transport Organizer
What is the solution, or how can I overcome this to switch on the statistics?
Please help.
Thanks,
raky
Hi,
In your production system, try to find the transport containing your ZSD_SALES and release it. After doing that you can switch on statistics.
/manfred
Similar Messages
-
Hi Everyone,
In the search I found documents only on BW statistics for InfoCubes, but not for ODS objects.
After executing queries on the ODS objects in BEx, no entries are made in table RSDDSTAT, so where else are they stored?
Can anyone please let me know how I can find the query runtime of ODS objects?
Thanks,
Prashant.
Hi,
Yes, I marked the ODS objects for OLAP and WHM as well, and executed the query in the BEx Analyzer, but I still do not get any entries for these objects in RSDDSTAT.
What could be the problem? Is there a different procedure to find the query runtime of ODS objects? Please let me know.
Thanks,
Prashant. -
Hi,
Is there any statistics report that gives a list of queries on ODS objects and their performance?
I found such reports for InfoCubes.
I need to look at queries on ODS objects that run slowly.
Thanks,
Sai.
Hi,
Have you marked these ODS objects in RSA1 -> Tools -> BW Statistics for InfoProvider, with the checkboxes for OLAP and WHM ticked?
Also look at this:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5401ab90-0201-0010-b394-99ffdb15235b -
Hi all,
When I right-click an InfoCube, choose Manage, and open the Performance tab, I see three areas of concern:
Check Index, which is green for my InfoCubes.
Check Aggregate Index, which is also green for my InfoCubes.
Check Statistics, which is where I am getting a yellow signal.
I assume these statistics are different from the ones we generate from DB13 as a daily job.
I also tried to create statistics with the Create Stats button, and the job BI_INDX_STATISTICS_BW was triggered successfully.
But even many days after that job finished, I am still getting a yellow symbol for the Check Statistics button.
Please let me know the exact procedure to generate statistics for an InfoCube, and when to run it.
Appreciate your quick replies.
Best Regards,
AjitR
Refreshing the stats should start a job. Are you running it in the background? No job running would really seem to be the problem to look into. The job would refresh stats on the fact tables, the dimension tables, and all the master data tables of characteristics in the cube.
RSRV should identify what tables are yellow.
Keep in mind, the warning process used is nothing very sophisticated. SAP simply checks the date statistics were last collected on a table and, if it is over 30 days old, flags it with a warning, regardless of how accurate or inaccurate the stats actually are. You very well may have tables that are being flagged because the date last analyzed is more than 30 days ago, but if the flagged tables really don't have lots of records being added, the older stats are still perfectly fine.
The best method for handling DB statistics collection is to check with your DBA and make sure they are running BRCONNECT daily. It can be configured to refresh stats based on some percentage of table activity, e.g. addition of more than 25% new rows since stats were last refreshed causes new stats on that table to be collected. If BRCONNECT is running regularly, then you don't need to worry about the yellow warnings: the tables that have regular activity will have stats updated by BRCONNECT, and tables that don't see much activity won't waste time refreshing stats unnecessarily. -
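The daily BRCONNECT run described above might be sketched like this (command shape only; it must run on a database host with BR*Tools installed, and the connect options vary by system):

```shell
# Check all tables and collect fresh statistics only where the change
# rate exceeds the configured stats_change_threshold (default 50%):
brconnect -u / -c -f stats -t all -p 4
```

The -p option sets the number of parallel threads; the threshold itself is configured in the BR*Tools profile, not on the command line.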
Are the statistics for web templates collected by BW Statistics 7.0?
Hi Experts.
Would anybody know if statistics for web templates are also collected by BW Statistics in BI 7.0? For example, if I want information about how often a particular web template is used by the users (not via BEx or RSRT), is it possible to get this from BW Statistics?
Thanks
Yes Rody, statistics for web templates are collected by BI Statistics.
I hope you have the BI Statistics InfoProviders installed.
In the InfoCube 0TCT_C01, 0TCTBIOBJCT refers to the web template name,
the key figure 0TCTWTCOUNT indicates the number of times the web template is accessed,
and 0TCTUSERNM is the user accessing the web template.
regards
Priyanka -
1. How do I create an explain plan with row source statistics for a complex query that includes multiple table joins?
When multiple tables are involved and the actual number of rows returned is more than what the explain plan estimates, how can I find out what change is needed in the statistics?
2. Do row source statistics give some kind of understanding of extended statistics?
You can get row source statistics only *after* the SQL has been executed. An explain plan by itself cannot give you row source statistics.
To get row source statistics, either set STATISTICS_LEVEL='ALL' in the session that executes the SQL, or use the hint "gather_plan_statistics" in the SQL being executed.
Then use dbms_xplan.display_cursor.
Hemant K Chitale -
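Hemant's two-step approach might look like this in SQL*Plus (a sketch; `your_table` and the predicate are placeholders, and you need access to the V$ views for display_cursor):

```sql
-- Step 1: execute the statement with row source statistics enabled.
SELECT /*+ gather_plan_statistics */ *
FROM   your_table
WHERE  id = 42;

-- Step 2: show the last executed plan with estimated vs. actual rows
-- per step (compare E-Rows and A-Rows to spot bad cardinality estimates).
SELECT *
FROM   TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
```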
How can I see visitor statistics for a web page hosted on OS X Lion Server?
Hello,
how can I see visitor statistics for a web page hosted on OS X Lion Server?
Thanks
Adrian
Just click inside the URL address bar. The full URL address, highlighted, will appear.
Best. -
Error when running report: "Error in aggregate table for InfoCube"
Hi Experts
We had a temporary error, which I would like to find the root cause for.
We were running a workbook which is based on a MultiProvider. For a short period of time (around 10 minutes) we got the following error when we executed the workbook based on this MultiProvider:
"Error in aggregate table for InfoCube"
There were no loads running or aggregates rolling up on the cubes in the MultiProvider.
I see no short dumps as well in ST22.
Has anybody seen this error before, and how can I trace how this error occurred?
Thanks in advance.
Kind regards,
Torben
Hi Sneha,
I suggest you run some RSRV tests.
Go to transaction RSRV. There you will find tests for aggregates. Just perform them and see whether you get any discrepancies.
Regds,
Shashank
Edited by: Shashank Dighe on Jan 4, 2008 10:51 AM -
Disable Statistics for specific Tables
Is it possible to disable statistics for specific tables?
If you want to stop gathering statistics for certain tables, you would simply not call DBMS_STATS.GATHER_TABLE_STATS on those particular tables (I'm assuming that is how you are gathering statistics at the moment). The old statistics will remain around for the CBO, but they won't be updated. Is that really what you want?
If you are currently using GATHER_SCHEMA_STATS to gather statistics, you would have to convert to calling GATHER_TABLE_STATS on each table. You'll probably want to have a table set up that lists what tables to exclude and use that in the procedure that calls GATHER_TABLE_STATS.
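Justin's suggestion could be sketched in PL/SQL roughly as follows (assumes a hypothetical exclusion table STATS_EXCLUDE with a TABLE_NAME column; adapt the owner and gathering options to your environment):

```sql
BEGIN
  FOR t IN (SELECT table_name
            FROM   user_tables
            WHERE  table_name NOT IN (SELECT table_name
                                      FROM   stats_exclude))
  LOOP
    -- Gather statistics table by table instead of via GATHER_SCHEMA_STATS,
    -- so excluded tables keep their old statistics untouched.
    DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                  tabname => t.table_name);
  END LOOP;
END;
/
```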
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC -
To get Run time Statistics for a Data target
Hello All,
I need to collect one month of data (i.e. start time and end time of the cube loads) for documentation work. Could someone help me find the easiest way to get the above-mentioned data in a BW production system?
Please guide me to the query name to get the runtime statistics for the cube.
Thanks in advance,
Anjali
It will fetch the data if the BI stats are turned on for that cube.
Please verify these links:
http://help.sap.com/saphelp_nw04s/helpdata/en/8c/131e3b9f10b904e10000000a114084/content.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/43/15c54048035a39e10000000a422035/frameset.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm -
Help: why does brconnect not collect statistics for the MSEG table?
I found that the MSEG table's statistics are too old,
so I checked the logs in DB13, and the scheduled job does not collect statistics for MSEG.
Then I executed manually: brconnect -c -u system/system -f stats -t mseg -p 4
This command still does not collect statistics for MSEG.
KS1DSDB1:oraprd 2> brconnect -c -u system/system -f stats -t mseg u2013f collect -p 4
BR0801I BRCONNECT 7.00 (46)
BR0154E Unexpected option value 'u2013f' found at position 8
BR0154E Unexpected option value 'collect' found at position 9
BR0806I End of BRCONNECT processing: ceenwjre.log 2010-11-12 08.41.38
BR0280I BRCONNECT time stamp: 2010-11-12 08.41.38
BR0804I BRCONNECT terminated with errors
KS1DSDB1:oraprd 3> brconnect -c -u system/system -f stats -t mseg -p 4
BR0801I BRCONNECT 7.00 (46)
BR0805I Start of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.04
BR0484I BRCONNECT log file: /oracle/PRD/sapcheck/ceenwjse.sta
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.11
BR0813I Schema owners found in database PRD: SAPPRD*, SAPPRDSHD+
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.12
BR0807I Name of database instance: PRD
BR0808I BRCONNECT action ID: ceenwjse
BR0809I BRCONNECT function ID: sta
BR0810I BRCONNECT function: stats
BR0812I Database objects for processing: MSEG
BR0851I Number of tables with missing statistics: 0
BR0852I Number of tables to delete statistics: 0
BR0854I Number of tables to collect statistics without checking: 0
BR0855I Number of indexes with missing statistics: 0
BR0856I Number of indexes to delete statistics: 0
BR0857I Number of indexes to collect statistics: 0
BR0853I Number of tables to check (and collect if needed) statistics: 1
Owner SAPPRD: 1
MSEG
BR0846I Number of threads that will be started in parallel to the main thread: 4
BR0126I Unattended mode active - no operator confirmation required
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
BR0817I Number of monitored/modified tables in schema of owner SAPPRD: 1/1
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
BR0877I Checking and collecting table and index statistics...
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.16
BR0879I Statistics checked for 1 table
BR0878I Number of tables selected to collect statistics after check: 0
BR0880I Statistics collected for 0/0 tables/indexes
BR0806I End of BRCONNECT processing: ceenwjse.sta 2010-11-12 08.42.16
BR0280I BRCONNECT time stamp: 2010-11-12 08.42.17
BR0802I BRCONNECT completed successfully
The log says:
Number of tables selected to collect statistics after check: 0
Could you give some advice? Thanks a lot.
Hello,
If you would like to force the creation of the stats for table MSEG you need to use the -f (force) switch.
If you leave out the -f switch, the stats_change_threshold parameter is applied, as you said correctly:
[http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/02/0ae0c6395911d5992200508b6b8b11/content.htm|http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/02/0ae0c6395911d5992200508b6b8b11/content.htm]
[http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/cb/f1e33a5bd8e934e10000000a114084/content.htm|http://help.sap.com/saphelp_nw70ehp1/helpdata/EN/cb/f1e33a5bd8e934e10000000a114084/content.htm]
You tried to do this in your second example:
==> brconnect -c -u system/system -f stats -t mseg u2013f collect -p 4
Therefore you received:
BR0154E Unexpected option value 'u2013f' found at position 8
BR0154E Unexpected option value 'collect' found at position 9
That was the right idea; however, the character in front of the f switch is not a proper hyphen ('u2013' is a mangled en dash).
Try again with the following statement (-f instead of u2013f) and you will see that it works:
==> brconnect -c -u system/system -f stats -t mseg -f collect -p 4
I hope this can help you.
Regards.
Wim -
Which event classes should I use for finding good indexes and statistics for queries in an SP?
Dear all,
I am trying to use Profiler to create a trace, so that it can be used as the workload in the
"Database Engine Tuning Advisor" for optimization of one stored procedure.
Please tell me which event classes I should use in the trace.
The stored procedure contains three insert queries which insert data into a table variable.
Finally, a select query is used on the same table variable, with one union of the same table variable, to generate a sequence for records based on certain conditions on a few columns.
There are three cases where I am using the above structure of the SP, so there are three SPs; out of the three, I will choose one based on their performance.
1) There is only one table with three inserts which go into a table variable, with a final sequence creation block.
2) There are 15 tables with 45 inserts, which go into a table variable, with a final sequence creation block.
3) There are 3 tables with 9 inserts, which go into a table variable, with a final sequence creation block.
In all the above cases the number of records will be around 5 lakh (500,000).
The purpose is optimization of the queries in the SP,
i.e. which event classes I should use for finding good indexes and statistics for queries in the SP.
Yours sincerely
You can use the "Tuning" template to capture the workload to a trace file that can be used by the DETA. See
http://technet.microsoft.com/en-us/library/ms190957(v=sql.105).aspx
If you are capturing the workload of a production server, I suggest you not do that directly from Profiler, as that can impact server performance. Instead, start/stop the Profiler Tuning template against a test server and then script the trace definition (File --> Export --> Script Trace Definition). You can then customize the script (e.g. file name) and run the script against the prod server to capture the workload to the specified file. Stop and remove the trace after the workload is captured with sp_trace_setstatus:
DECLARE @TraceID int = <trace id returned by the trace create script>
EXEC sp_trace_setstatus @TraceID, 0; --stop trace
EXEC sp_trace_setstatus @TraceID, 2; --remove trace definition
Dan Guzman, SQL Server MVP, http://www.dbdelta.com -
Managing statistics for object collections used as table types in SQL
Hi All,
Is there a way to manage statistics for collections used as table types in SQL.
Below is my test case
Oracle Version :
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE 11.2.0.3.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
SQL>
Original query:
SELECT
9999,
tbl_typ.FILE_ID,
tf.FILE_NM ,
tf.MIME_TYPE ,
dbms_lob.getlength(tfd.FILE_DATA)
FROM
    TG_FILE tf,
    TG_FILE_DATA tfd,
    (
     SELECT *
     FROM TABLE(
           SELECT CAST(TABLE_ESC_ATTACH(
                        OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                        OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL))
                  AS TABLE_ESC_ATTACH)
           FROM dual
          )
    ) tbl_typ
WHERE
tf.FILE_ID = tfd.FILE_ID
AND tf.FILE_ID = tbl_typ.FILE_ID
AND tfd.FILE_ID = tbl_typ.FILE_ID;
Elapsed: 00:00:02.90
Execution Plan
Plan hash value: 3970072279
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 194 | 4567 (2)| 00:00:55 |
|* 1 | HASH JOIN | | 1 | 194 | 4567 (2)| 00:00:55 |
|* 2 | HASH JOIN | | 8168 | 287K| 695 (3)| 00:00:09 |
| 3 | VIEW | | 8168 | 103K| 29 (0)| 00:00:01 |
| 4 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 16336 | 29 (0)| 00:00:01 |
| 5 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 6 | TABLE ACCESS FULL | TG_FILE | 565K| 12M| 659 (2)| 00:00:08 |
| 7 | TABLE ACCESS FULL | TG_FILE_DATA | 852K| 128M| 3863 (1)| 00:00:47 |
Predicate Information (identified by operation id):
1 - access("TF"."FILE_ID"="TFD"."FILE_ID" AND "TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
2 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
Statistics
7 recursive calls
0 db block gets
16783 consistent gets
16779 physical reads
0 redo size
916 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Indexes are present on column FILE_ID in both tables (TG_FILE, TG_FILE_DATA).
select
index_name,blevel,leaf_blocks,DISTINCT_KEYS,clustering_factor,num_rows,sample_size
from
all_indexes
where table_name in ('TG_FILE','TG_FILE_DATA');
INDEX_NAME BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS SAMPLE_SIZE
TG_FILE_PK 2 2160 552842 21401 552842 285428
TG_FILE_DATA_PK 2 3544 852297 61437 852297 852297
Ideally the view should have used a NESTED LOOPS join to make use of the indexes, since the number of rows coming from the object collection is only 2.
But the optimizer takes the default estimate of 8168 rows, leading to a HASH join between the tables and therefore FULL TABLE accesses.
So my question is: is there any way I can influence the statistics when using collections in SQL?
I can use hints to force the indexes, but I am planning to avoid that for now. Currently the time shown in the explain plan is not accurate.
Modified query with hints :
SELECT
/*+ index(tf TG_FILE_PK ) index(tfd TG_FILE_DATA_PK) */
9999,
tbl_typ.FILE_ID,
tf.FILE_NM ,
tf.MIME_TYPE ,
dbms_lob.getlength(tfd.FILE_DATA)
FROM
    TG_FILE tf,
    TG_FILE_DATA tfd,
    (
     SELECT *
     FROM TABLE(
           SELECT CAST(TABLE_ESC_ATTACH(
                        OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                        OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL))
                  AS TABLE_ESC_ATTACH)
           FROM dual
          )
    ) tbl_typ
WHERE
tf.FILE_ID = tfd.FILE_ID
AND tf.FILE_ID = tbl_typ.FILE_ID
AND tfd.FILE_ID = tbl_typ.FILE_ID;
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 1670128954
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 194 | 29978 (1)| 00:06:00 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 1 | 194 | 29978 (1)| 00:06:00 |
| 3 | NESTED LOOPS | | 8168 | 1363K| 16379 (1)| 00:03:17 |
| 4 | VIEW | | 8168 | 103K| 29 (0)| 00:00:01 |
| 5 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 8168 | 16336 | 29 (0)| 00:00:01 |
| 6 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID | TG_FILE_DATA | 1 | 158 | 2 (0)| 00:00:01 |
|* 8 | INDEX UNIQUE SCAN | TG_FILE_DATA_PK | 1 | | 1 (0)| 00:00:01 |
|* 9 | INDEX UNIQUE SCAN | TG_FILE_PK | 1 | | 1 (0)| 00:00:01 |
| 10 | TABLE ACCESS BY INDEX ROWID | TG_FILE | 1 | 23 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
8 - access("TFD"."FILE_ID"="TBL_TYP"."FILE_ID")
9 - access("TF"."FILE_ID"="TBL_TYP"."FILE_ID")
filter("TF"."FILE_ID"="TFD"."FILE_ID")
Statistics
0 recursive calls
0 db block gets
16 consistent gets
8 physical reads
0 redo size
916 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
Thanks,
B
Thanks Tubby,
While searching I had found that we can use the CARDINALITY hint to set statistics for a TABLE function,
but I preferred not to mention it, as it is currently an undocumented hint. I now think I should have mentioned it when posting the first time.
http://www.oracle-developer.net/display.php?id=427
Going through that article, it mentions these approaches to set statistics:
1) CARDINALITY (undocumented hint)
2) OPT_ESTIMATE (undocumented hint)
3) DYNAMIC_SAMPLING (documented hint)
4) Extensible Optimizer
I tried it out with the different hints and they work as expected,
i.e. CARDINALITY and OPT_ESTIMATE take the value I set,
but the DYNAMIC_SAMPLING hint provides the most correct estimate of the rows (which is 2 in this particular case).
With CARDINALITY hint
SELECT
/*+ cardinality(e, 5) */ *
FROM TABLE(
      SELECT CAST(TABLE_ESC_ATTACH(
                   OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL))
             AS TABLE_ESC_ATTACH)
      FROM dual
     ) e;
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 1467416936
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 10 | 29 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 5 | 10 | 29 (0)| 00:00:01 |
| 2 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
With OPT_ESTIMATE hint
SELECT
/*+ opt_estimate(table, e, scale_rows=0.0006) */ *
FROM TABLE(
      SELECT CAST(TABLE_ESC_ATTACH(
                   OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL))
             AS TABLE_ESC_ATTACH)
      FROM dual
     ) e;
Execution Plan
Plan hash value: 4043204977
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 485 | 29 (0)| 00:00:01 |
| 1 | VIEW | | 5 | 485 | 29 (0)| 00:00:01 |
| 2 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 5 | 10 | 29 (0)| 00:00:01 |
| 3 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
With DYNAMIC_SAMPLING hint
SELECT
/*+ dynamic_sampling(e, 5) */ *
FROM TABLE(
      SELECT CAST(TABLE_ESC_ATTACH(
                   OBJ_ESC_ATTACH( 9999, 99991, 'file1.png', NULL, NULL, NULL),
                   OBJ_ESC_ATTACH( 9999, 99992, 'file2.png', NULL, NULL, NULL))
             AS TABLE_ESC_ATTACH)
      FROM dual
     ) e;
Elapsed: 00:00:00.00
Execution Plan
Plan hash value: 1467416936
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 4 | 11 (0)| 00:00:01 |
| 1 | COLLECTION ITERATOR CONSTRUCTOR FETCH| | 2 | 4 | 11 (0)| 00:00:01 |
| 2 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
Note
- dynamic sampling used for this statement (level=2)
I will be testing the last option, the Extensible Optimizer, and will put my findings here.
I hope Oracle, in future releases, improves statistics gathering for collections used in DML, rather than just using a default derived from the block size.
By the way, are you aware why it uses the default block size? Is it because it is the smallest granular unit which Oracle provides?
Regards,
B -
Oracle 11g upgrade: How to update stale statistics for sys and sysman?
Hi,
I am in the process of testing an Oracle 11g upgrade from Oracle 10.2.0.3. I have run utlu111i.sql on the 10g database.
The utility utlu111i.sql reports stale statistics for the SYS and SYSMAN components.
I executed dbms_stats.gather_dictionary_stats, dbms_stats.gather_schema_stats('SYS'), and dbms_stats.gather_schema_stats('SYSMAN').
After that, utlu111i.sql still reports stale statistics for SYS and SYSMAN. Does anyone know how to get rid of this warning successfully?
Thanks,
Sreekanth
Just ignore the warnings. Check "The Utlu111i.Sql Pre-Upgrade Script Reports Stale Sys Statistics" - note 803774.1 from Metalink.
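If you want to verify what the pre-upgrade tool is seeing before deciding to ignore it, one way (a sketch; requires DBA privileges) is to query the staleness flag Oracle itself maintains:

```sql
-- List SYS/SYSMAN tables Oracle currently flags as stale:
SELECT owner, table_name, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner IN ('SYS', 'SYSMAN')
AND    stale_stats = 'YES'
ORDER  BY owner, table_name;
```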
-
Create new CBO statistics for the tables
Dear All,
I am facing bad performance on the server. In SM50 I see that the read and delete processes on table D010LINC take a long time. How do I create new CBO statistics for the tables D010TAB and D010INC? Please suggest.
Regards,
Kumar
Hi,
I am facing a problem when saving/activating any program, so SAP has told me to create new CBO statistics for the tables D010TAB and D010INC.
Now, as you suggested, in transaction DB20:
Table D010LINC
The error "Table D010LINC does not exist in the ABAP Dictionary" comes up.
Table D010TAB
Statistics are current (|Changes| < 50 %)
New Method C
New Sample Size
Old Method C Date 10.03.2010
Old Sample Size Time 07:39:37
Old Number 51,104,357 Deviation Old -> New 0 %
New Number 51,168,679 Deviation New -> Old 0 %
Inserted Rows 160,770 Percentage Too Old 0 %
Changed Rows 0 Percentage Too Old 0 %
Deleted Rows 96,448 Percentage Too New 0 %
Use O
Active Flag P
Analysis Method C
Sample Size
Please suggest
Regards,
Kumar
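For reference, the same statistics refresh can usually be forced from the OS level with BRCONNECT rather than DB20 (a sketch only; run as the <sid>adm user, and adjust the connect options to your system):

```shell
# Force fresh CBO statistics for the two program tables from the thread:
brconnect -u / -c -f stats -t "D010TAB, D010INC" -f collect
```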