Queries in BW system
Hi
I'm trying to get some more information on queries in our BW system.
1. How many queries run on a daily basis, and how many queries run in parallel? Is there a way to find this out? I checked the table 'RSDDSTAT', but it is hard to interpret; for example, I don't understand the start time format '20.080.730.140.839,1668870'. I tried 'RSRT' and got the following information:
STATUID --> 4ANJOK50HR3QGQ0P7JSI14QWA
SESSIONUID --> 4ANJOKCP0PPFZCK5DDUUB6PM2
NAVSTEPUID --> 4ANJOKKDJOB5HZ3LJ7X6L8OBU
INFOCUBE --> 0COOM_C02
HANDLE --> 1
QUERYID --> 0COOM_C02/Y_0COOM_C02_QW003
UNAME --> NAGI_A
QNACHLESEN --> H
OLAPMODE --> 1
QRUNTIMECATEGORY --> 5
QNAVSTEP --> 3
QNUMOLAPREADS --> 1
QDBSEL --> 229.497
QDBTRANS --> 3.819
QNUMCELLS --> 3.156
QNUMRANGES --> 72
RECCHAVLREAD --> 264
QTIMEOLAPINIT --> 0,214844
QTIMEOLAP --> 0,414064
QTIMEDB --> 3,191406
QTIMEVARDP --> 1,589844
QTIMECLIENT --> 0,062500
TIMECHAVLREAD --> 0,144532
TIMEREST --> 1,660156
DMTDBBASIC --> 3,125000
STARTTIME --> 20.080.730.140.839,1668870
Is there any link or documentation to help understand all of the above parameters?
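For what it's worth, here is a rough sketch of how daily counts could be pulled from RSDDSTAT (this assumes STARTTIME is a decimal timestamp of the form YYYYMMDDHHMMSS.fffffff, as the value above suggests, that the table can be queried with plain SQL via a database tool, and that the field names are the ones listed above):
SELECT FLOOR(STARTTIME / 1000000) AS run_date,        -- YYYYMMDD part of the decimal timestamp
       COUNT(*)                   AS navigation_steps, -- roughly one row per navigation step
       COUNT(DISTINCT QUERYID)    AS distinct_queries,
       COUNT(DISTINCT UNAME)      AS distinct_users
FROM   RSDDSTAT
GROUP BY FLOOR(STARTTIME / 1000000)
ORDER BY 1
Truly parallel executions would still have to be inferred from overlapping start times and runtimes, so treat this only as an approximation.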
2. I have noticed in our system that some queries read data from an ODS instead of an InfoCube. Is there any benefit to running queries against an ODS rather than an InfoCube? In our company, the BW application team runs one query that reads data from an ODS, and I don't know the reason for it.
3. With 'RSRT', we can get performance information for queries on an InfoCube. Is there any transaction or tool to check the performance of queries that read data from an ODS?
Thanks
Amar
Hi Amarjit,
The choice of ODS or cube for reporting depends on your requirements. It depends on several factors, such as the level of detail in which you want to see the reports, the amount of data you will be looking at, and so on. Since an ODS is a flat structure (a transparent table), it is not well suited to reporting on large amounts of data. If you want reports that contain data for several years (a huge volume), you are better off running the report on a cube. Otherwise, reporting on an ODS is OK. It all depends on your requirement.
Refer:
[Re: ODS Reporting]
[Difference between ODS & Cube Reporting]
[ODS vs Cube Reporting]
Hope this helps you.
Regards,
Yokesh.
Similar Messages
-
Deleting Queries in Production system
Hello,
I transported a few queries into the production system, but I would like to delete those queries.
Can I just delete them in production, or is there a procedure to follow?
These queries are just brand new queries.
Thanks,
What is the difference between "Create Transport Requests for BEx" and "Transport object"?
When do we use "Create Transport Requests for BEx "?
When do we use "Transport object"?
Correct answer = points
Thanks,
Edited by: Moderator on Jan 30, 2008 3:41 PM
*Everyone on the forum knows that points will be assigned for correct and useful replies. There is really no need to keep mentioning this on each post you create. Thanks! -
How to create a transport for queries from production system
We have a situation where the production client has been open during cutover and a user has modified queries which were previously transported into production from our development system. We now wish to 'synchronise' our systems so production can be closed (although we will use the changeability options to allow maintenance of local queries).
Does anyone have experience of this problem? What is the best way of exporting the queries in such a way that we can import them into our development and quality systems?
As ever, any suggestions will be gratefully received.
Hi,
In a similar situation, we opted for manual modifications in Dev: we re-applied the changes to the queries in Dev so they matched Prod, and then transported them from Dev to Q and then to Prod.
This is how generally it is done.
Regards,
Yogesh -
List of all queries in the system
Hi all
I want to get a list of all QUERIES in my BW system and the details of the DATA TARGETS (cube or ODS) on which they are built.
Can I get this from any TABLE or TRANSACTION, or will I have to go through each query (grouped by INFOAREA) one by one? That would be very tedious. Please advise.
Regards,
Pradyut.
Hi Pradyut,
Use this table instead: RSZCOMPIC. Give the cube/ODS technical name as the selection parameter and set the version to active.
This gives you the list of technical names of the queries for a cube/ODS. To get the description of a query, use the table RSZCOMPDIR: give the technical name of the query as the input parameter and read the description. The description is stored in the COMPID field.
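For example, a rough SQL sketch of that lookup (the link field COMPUID and the version field OBJVERS are assumptions based on the usual layout of these tables; please verify the field names in SE11 before using this):
SELECT ic.INFOCUBE,
       dir.COMPID
FROM   RSZCOMPIC  ic,
       RSZCOMPDIR dir
WHERE  dir.COMPUID = ic.COMPUID
AND    ic.OBJVERS  = 'A'        -- active version
AND    dir.OBJVERS = 'A'
ORDER BY ic.INFOCUBE, dir.COMPID
This pairs each cube/ODS with the COMPID entries of its queries from RSZCOMPDIR.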
Hope this helps.
Bye
Dinesh
(Do not forget to assign points!!) -
Hi All,
We have a scenario where we are developing and transporting a query from sandbox to production. If the same query with the same technical name is developed and transported from another system to production, will the query be duplicated?
Thanks in advance.
The query keys aren't the technical name; it is an internally assigned GUID. If you take a look at the RSZELTDIR table, the key is ELTID (Query Element ID), which is this internally assigned GUID. The MAPNAME column of this table is where the technical name of the query is stored.
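To illustrate, a rough sketch that lists technical names occurring more than once in RSZELTDIR (the columns ELTID and MAPNAME are the ones mentioned above; filtering on OBJVERS = 'A' for the active version is an assumption, so check the table layout first):
SELECT MAPNAME,
       COUNT(*) AS element_count   -- > 1 means several ELTIDs share one technical name
FROM   RSZELTDIR
WHERE  OBJVERS = 'A'
AND    MAPNAME <> ' '
GROUP BY MAPNAME
HAVING COUNT(*) > 1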
So, if you create two queries with the same technical name, you can transport them both through your landscape and they will both coexist. If you're using the SAP BW Portal, updates to the published query will be affected. If you modify a query that has the same technical name, but isn't the query that was published (different Query Element ID), the query in the portal will not change. -
Transport user groups/infosets/queries from test system to productive
Hello experts,
I'm trying to transport user groups/InfoSets/queries, but it doesn't seem to work. For example, I select the InfoSet, then the transport button.
I choose EXPORT
then Transport Infosets
I type the infoset name
but I cannot work out whether I should change the "IMPORT OPTION - REPLACE" setting.
Can anyone guide me with this procedure?
Thank you in Advance
N.
Hi Nantia,
Step 1. Run RSAQR3TR in 'old' system. Select EXPORT and specify the objects to be transported.
(System includes these in an automatically created transport request)
Step 2. Release this transport and request it be transported to 'new' system.
(This results in the entries being populated in transport table AQTDB in 'new' system).
Step 3. Run RSAQR3TR in 'new' system. Select IMPORT and tick the Overwrite checkbox. Specify the transport number in the field labelled 'dataset with imports'.
(RSAQR3TR gives the message that stuff has been imported OK).
Step 4. In my example I still couldn't find the queries, so I ran RSAQR3TR one more time, this time specifying 'Copy Standard Area -> Global Area'. -
Hi,
I have the following queries:
1. Is it possible to allocate one material with the same order type to different standard networks / network alternatives in CN08?
2. Is there any report in PS where the planned and actual man-days worked on a project can be seen? In my scenario, the actual man-days worked are transferred to PS via transaction CATA.
3. How do I handle a sub-project scenario under assembly processing with strategy 85?
4. What is meant by workforce planning? Is it the deployment of manpower on a project?
Rgds,
Prasad
In workforce planning, specified work can be distributed to the persons assigned to the work center.
Benefits of Workforce planning:
- Persons can be assigned to activities quickly and easily
- Data display:
- Availability of the person (read from HR)
- Total capacity load on the person
- Activity data
You can assign persons who
a) are assigned to the work center of the activity,
b) belong to the project team, or
c) are available in HR. -
Running queries coming from different systems
Hello All
I have a scenario,
We are using BI version 7.0,
The requirement is to run queries in the BEx Analyzer coming from two different SAP BW systems.
For example :
We have two BW systems A and B,
In the first tab we have a query running against BW system A, and in the second tab a query running against BW system B.
When we try to run the second query in the second tab, the first query vanishes.
Can anybody help me? Is it possible to run the two different queries from different systems?
Regards,
Ravi
Hi Ravi,
Even using RRI you will not be able to combine reports from different systems in a single workbook. RRI is generally used for summary-to-detail reporting: from the first report you click on a particular characteristic, and further details for that characteristic are shown in a different report, which can come from any other source.
If you want this kind of report, you will have to build a VirtualProvider in one of the BI systems with the other system as its source, then create a query on it and use it in the same workbook.
This way your report might take longer to execute, but you don't need to load all your data from one system into the other.
Regards,
Durgesh. -
Error message getting displayed in testing system not in development system
Hi All,
I am working on BW 3.5 BEx Query Designer.
I have transported my queries to the quality system twice, as there were a few changes in the variables.
Now, after the second transport, when I execute the queries in the quality system I get an error message at the top of the report output stating that "variable name" does not exist.
I am not able to trace why the error message is displayed even though the query itself runs fine in the quality system.
If it is due to some variable that has been removed or deleted, will it impact the move to production?
how to resolve this problem?
Please provide me some inputs.
Thanks,
Rashmi.
Edited by: Rashmi on Aug 6, 2008 1:29 PM
Rashmi,
Create the transport request again in RSA1, and while creating it make sure you collect all the query elements.
Transport it again into quality and test it as per the business requirement.
That should help resolve the issue. -
Hi
We are working on BI 7. We are not able to execute the queries in Excel; the system is throwing the following error:
PLEASE INSTALL EXCEL AS VIEWER.
We installed Excel Viewer, but we are not able to solve the issue.
We have installed .NET Framework 2.0, the BI 7 GUI and the GUI patches.
Please help; waiting for your replies.
cheers
purushottam
Hi Purushottam,
It is mainly due to your SAP GUI installation. Try installing the SAP GUI on your desktop once more with the BW component ticked during installation. To be on the safe side, please also check the details below:
1. In your role, check that the authorization objects S_RFC, S_COMP and S_COMP1 have been added.
2. If you are trying to open a workbook, make sure you have authorization for the S_GUI object.
Regards,
Vijay. -
Precautions to be taken while changing the Query read mode in PED system
Dear Experts,
I have been given a task to change the query read mode for a number of queries directly in the production system.
Please let me know the steps to follow and the precautions to take while changing the query read mode in PED.
One more doubt regarding this: if I plan to change the read mode of a query, say Q1, how can I find out whether Q1 is being executed at that time?
Thanks in advance for valuable response.
Thanks & Regards,
Ramesh - Kumar
Hello,
You can change the query read mode in transaction 'RSRT', here are the steps:
1) Access transaction RSRT and enter the query name
2) Select the 'Properties' option
3) Un-check the 'Info provider' setting next to the 'Read mode' --> This enables you to change the read mode of the query --> Execute
4) Choose the option 'Generate Report' to re-generate the query program
Hope this info helps.
Thanks
Bala -
[Fwd: Re: Integration with CM Systems ...]
-------- Original Message --------
Subject: Re: Integration with CM Systems ...
Date: Thu, 10 Aug 2000 15:47:21 -0600
From: Cindy Eldenburg <[email protected]>
Organization: BEA Systems, Inc.
Newsgroups: weblogic.developer.interest.personalization
References: <398f3c55$[email protected]>
Prashanth,
There are a lot of differences between what Documentum does and what Interwoven TeamSite does. Comparing these items is like comparing apples and oranges.
Normally in TeamSite, during the document capture process, a TeamSite user categorizes documents by specifying the documents' metadata attributes using TeamSite templates. Once the documents are captured, the Interwoven OpenDeploy workflow mechanism is used to publish the content to the WLCS database.
Unlike Interwoven TeamSite, Documentum products manage the metadata and documents in their own repositories. Thus, with this integration, WLCS queries the Documentum system at runtime via a specialized JDBC driver supplied by Documentum. Once the Documentum user captures a document and tags it with metadata attributes, the document may be immediately available to WLCS (depending on administrative options in Documentum, such as whether staging is involved).
Please contact your sales person for a more detailed overview of what these two products do and how they interface to WLPS.
Cindy Eldenburg
Prashanth A wrote:
Can anyone explain to me clearly what Weblogic means when it says it integrates with Interwoven and Documentum? Is there any difference in the way it interacts with Interwoven as compared to Documentum, as in a run-time interaction with Documentum versus a static integration with Interwoven? -
How to improve speed of queries that use ORM one table per concrete class
Hi,
Many ORM (Object-Relational Mapping) tools, such as Castor, Hibernate, TopLink, JPOX, etc., have a one-table-per-concrete-class feature that maps objects to the following structure:
CREATE TABLE ABSTRACTPRODUCT (
  ID VARCHAR(8) NOT NULL,
  DESCRIPTION VARCHAR(60) NOT NULL,
  PRIMARY KEY(ID)
)
CREATE TABLE PRODUCT (
  ID VARCHAR(8) NOT NULL REFERENCES ABSTRACTPRODUCT(ID),
  CODE VARCHAR(10) NOT NULL,
  PRICE DECIMAL(12,2),
  PRIMARY KEY(ID)
)
CREATE UNIQUE INDEX iProduct ON Product(code)
CREATE TABLE BOOK (
  ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
  AUTHOR VARCHAR(60) NOT NULL,
  PRIMARY KEY (ID)
)
CREATE TABLE COMPACTDISK (
  ID VARCHAR(8) NOT NULL REFERENCES PRODUCT(ID),
  ARTIST VARCHAR(60) NOT NULL,
  PRIMARY KEY(ID)
)
Is there a way to improve queries like:
SELECT
pd.code CODE,
abpd.description DESCRIPTION,
DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
FROM
ABSTRACTPRODUCT abpd,
PRODUCT pd,
BOOK bk,
COMPACTDISK cd
WHERE
pd.id = abpd.id AND
bk.id(+) = abpd.id AND
cd.id(+) = abpd.id AND
pd.code like '101%'
or like this:
SELECT
pd.code CODE,
abpd.description DESCRIPTION,
DECODE(bk.id,NULL,cd.artist,bk.author) PERSON
FROM
ABSTRACTPRODUCT abpd,
PRODUCT pd,
BOOK bk,
COMPACTDISK cd
WHERE
pd.id = abpd.id AND
bk.id(+) = abpd.id AND
cd.id(+) = abpd.id AND
abpd.description like '%STARS%' AND
pd.price BETWEEN 1 AND 10
Think of a table with many rows: is there something inside MaxDB to improve this type of query? Some annotation on the SQL? Or a way to declare tables that extend another via the primary key? On other databases I managed this using materialized views, but I think it could be faster just using the PK; am I wrong? Would it be better to consolidate all tables into one table, and what would be the impact on database size of such a consolidation?
Note: with consolidation I will lose the NOT NULL constraints on the database side.
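For illustration, a consolidated single-table variant of the structure above might look roughly like this (sketch only: the PRODUCT_TYPE discriminator column is an assumption, the remaining columns follow the DDL above, and the former subclass columns have to become nullable, which is exactly the lost NOT NULL constraint mentioned in the note):
CREATE TABLE PRODUCT_ALL (
  ID           VARCHAR(8)  NOT NULL,
  PRODUCT_TYPE VARCHAR(4)  NOT NULL,   -- e.g. 'BOOK' or 'CD' (assumed discriminator)
  DESCRIPTION  VARCHAR(60) NOT NULL,
  CODE         VARCHAR(10) NOT NULL,
  PRICE        DECIMAL(12,2),
  AUTHOR       VARCHAR(60),            -- only filled for books
  ARTIST       VARCHAR(60),            -- only filled for compact disks
  PRIMARY KEY (ID)
)
CREATE UNIQUE INDEX iProductAll ON PRODUCT_ALL(CODE)
Queries then need no joins at all, at the cost of wider rows and weaker constraints.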
thanks for any insight.
Clóvis
Hi Lars,
I don't understand why the optimizer picks that index for TM in the execution plan and why it doesn't use the join via the KEY column. Note that the WHERE clause is "TM.OID = MF.MY_TIPO_MOVIMENTO", i.e. on the key column, yet the optimizer uses an index whose indexed column is ID_SYS, which isn't and can't be a primary key because it is not UNIQUE. These are the index columns:
indexes of TipoMovimento
INDEXNAME COLUMNNAME SORT COLUMNNO DATATYPE LEN INDEX_USED FILESTATE DISABLED
ITIPOMOVIMENTO TIPO ASC 1 VARCHAR 2 220546 OK NO
ITIPOMOVIMENTO ID_SYS ASC 2 CHAR 6 220546 OK NO
ITIPOMOVIMENTO MY_CONTA_DEBITO ASC 3 CHAR 8 220546 OK NO
ITIPOMOVIMENTO MY_CONTA_CREDITO ASC 4 CHAR 8 220546 OK NO
ITIPOMOVIMENTO1 ID_SYS ASC 1 CHAR 6 567358 OK NO
ITIPOMOVIMENTO2 DESCRICAO ASC 1 VARCHAR 60 94692 OK NO
After I created the index iTituloCobrancaX7 on TituloCobranca(OID, DATA_VENCIMENTO) in a backup instance, I was surprised by the following explain plan:
OWNER TABLENAME COLUMN_OR_INDEX STRATEGY PAGECOUNT
TC ITITULOCOBRANCA1 RANGE CONDITION FOR INDEX 5368
DATA_VENCIMENTO (USED INDEX COLUMN)
MF OID JOIN VIA KEY COLUMN 9427
TM OID JOIN VIA KEY COLUMN 22
TABLE HASHED
PS OID JOIN VIA KEY COLUMN 1350
BOL OID JOIN VIA KEY COLUMN 497
NO TEMPORARY RESULTS CREATED
JDBC_CURSOR_19 RESULT IS COPIED , COSTVALUE IS 988
Note that now the optimizer uses the index ITITULOCOBRANCA1 as I expected; even if I drop the new index iTituloCobrancaX7 the optimizer still produces this execution plan, and with it the query executes in 110 ms. With that great news I did the same thing in the production system, but there the execution plan does not change and I still get a long execution time, this time 413516 ms. Maybe the problem is how the optimizer measures my tables.
I checked in DBAnalyser that the problem is the catalog cache hit rate (we discussed this in [catalog cache hit rate, how to increase?]) and the low selectivity of this SQL command. To achieve better selectivity I would need an index over MF.MY_SACADO, MF.TIPO and TC.DATA_VENCIMENTO, as explained in previous posts; since this type of cross-table index isn't possible inside MaxDB, I have no way to speed up this type of query without changing the table structure.
Could the MaxDB developers implement this type of index, or are there no plans for such a feature?
If not, I will have to create another schema and consolidate tables to speed up the queries on my system, but with this consolidation I will get more overhead. I must solve the poor selectivity, because I think that if the data in the tables grows, the query will become impossible. I see that CREATE INDEX supports a FUNCTION; maybe a function that joins data from two tables could solve this?
About the instance configuration, it is:
Machine:
Version: '64BIT Kernel'
Version: 'X64/LIX86 7.6.03 Build 007-123-157-515'
Version: 'FAST'
Machine: 'x86_64'
Processors: 2 ( logical: 8, cores: 8 )
data volumes:
ID MODE CONFIGUREDSIZE USABLESIZE USEDSIZE USEDSIZEPERCENTAGE DROPVOLUME TOTALCLUSTERAREASIZE RESERVEDCLUSTERAREASIZE USEDCLUSTERAREASIZE PATH
1 NORMAL 4194304 4194288 379464 9 NO 0 0 0 /db/SPDT/data/data01.dat
2 NORMAL 4194304 4194288 380432 9 NO 0 0 0 /db/SPDT/data/data02.dat
3 NORMAL 4194304 4194288 379184 9 NO 0 0 0 /db/SPDT/data/data03.dat
4 NORMAL 4194304 4194288 379624 9 NO 0 0 0 /db/SPDT/data/data04.dat
5 NORMAL 4194304 4194288 380024 9 NO 0 0 0 /db/SPDT/data/data05.dat
log volumes:
ID CONFIGUREDSIZE USABLESIZE PATH MIRRORPATH
1 51200 51176 /db/SPDT/log/log01.dat ?
parameters:
KERNELVERSION KERNEL 7.6.03 BUILD 007-123-157-515
INSTANCE_TYPE OLTP
MCOD NO
_SERVERDB_FOR_SAP YES
_UNICODE NO
DEFAULT_CODE ASCII
DATE_TIME_FORMAT ISO
CONTROLUSERID DBM
CONTROLPASSWORD
MAXLOGVOLUMES 2
MAXDATAVOLUMES 11
LOG_VOLUME_NAME_001 /db/SPDT/log/log01.dat
LOG_VOLUME_TYPE_001 F
LOG_VOLUME_SIZE_001 6400
DATA_VOLUME_NAME_0005 /db/SPDT/data/data05.dat
DATA_VOLUME_NAME_0004 /db/SPDT/data/data04.dat
DATA_VOLUME_NAME_0003 /db/SPDT/data/data03.dat
DATA_VOLUME_NAME_0002 /db/SPDT/data/data02.dat
DATA_VOLUME_NAME_0001 /db/SPDT/data/data01.dat
DATA_VOLUME_TYPE_0005 F
DATA_VOLUME_TYPE_0004 F
DATA_VOLUME_TYPE_0003 F
DATA_VOLUME_TYPE_0002 F
DATA_VOLUME_TYPE_0001 F
DATA_VOLUME_SIZE_0005 524288
DATA_VOLUME_SIZE_0004 524288
DATA_VOLUME_SIZE_0003 524288
DATA_VOLUME_SIZE_0002 524288
DATA_VOLUME_SIZE_0001 524288
DATA_VOLUME_MODE_0005 NORMAL
DATA_VOLUME_MODE_0004 NORMAL
DATA_VOLUME_MODE_0003 NORMAL
DATA_VOLUME_MODE_0002 NORMAL
DATA_VOLUME_MODE_0001 NORMAL
DATA_VOLUME_GROUPS 1
LOG_BACKUP_TO_PIPE NO
MAXBACKUPDEVS 2
LOG_MIRRORED NO
MAXVOLUMES 14
LOG_IO_BLOCK_COUNT 8
DATA_IO_BLOCK_COUNT 64
BACKUP_BLOCK_CNT 64
_DELAY_LOGWRITER 0
LOG_IO_QUEUE 50
_RESTART_TIME 600
MAXCPU 8
MAX_LOG_QUEUE_COUNT 0
USED_MAX_LOG_QUEUE_COUNT 8
LOG_QUEUE_COUNT 1
MAXUSERTASKS 500
_TRANS_RGNS 8
_TAB_RGNS 8
_OMS_REGIONS 0
_OMS_RGNS 7
OMS_HEAP_LIMIT 0
OMS_HEAP_COUNT 8
OMS_HEAP_BLOCKSIZE 10000
OMS_HEAP_THRESHOLD 100
OMS_VERS_THRESHOLD 2097152
HEAP_CHECK_LEVEL 0
_ROW_RGNS 8
RESERVEDSERVERTASKS 16
MINSERVERTASKS 28
MAXSERVERTASKS 28
_MAXGARBAGE_COLL 1
_MAXTRANS 4008
MAXLOCKS 120080
_LOCK_SUPPLY_BLOCK 100
DEADLOCK_DETECTION 4
SESSION_TIMEOUT 180
OMS_STREAM_TIMEOUT 30
REQUEST_TIMEOUT 5000
_IOPROCS_PER_DEV 2
_IOPROCS_FOR_PRIO 0
_IOPROCS_FOR_READER 0
_USE_IOPROCS_ONLY NO
_IOPROCS_SWITCH 2
LRU_FOR_SCAN NO
_PAGE_SIZE 8192
_PACKET_SIZE 131072
_MINREPLY_SIZE 4096
_MBLOCK_DATA_SIZE 32768
_MBLOCK_QUAL_SIZE 32768
_MBLOCK_STACK_SIZE 32768
_MBLOCK_STRAT_SIZE 16384
_WORKSTACK_SIZE 8192
_WORKDATA_SIZE 8192
_CAT_CACHE_MINSIZE 262144
CAT_CACHE_SUPPLY 131072
INIT_ALLOCATORSIZE 262144
ALLOW_MULTIPLE_SERVERTASK_UKTS NO
_TASKCLUSTER_01 tw;al;ut;2000*sv,100*bup;10*ev,10*gc;
_TASKCLUSTER_02 ti,100*dw;63*us;
_TASKCLUSTER_03 equalize
_DYN_TASK_STACK NO
_MP_RGN_QUEUE YES
_MP_RGN_DIRTY_READ DEFAULT
_MP_RGN_BUSY_WAIT DEFAULT
_MP_DISP_LOOPS 2
_MP_DISP_PRIO DEFAULT
MP_RGN_LOOP -1
_MP_RGN_PRIO DEFAULT
MAXRGN_REQUEST -1
_PRIO_BASE_U2U 100
_PRIO_BASE_IOC 80
_PRIO_BASE_RAV 80
_PRIO_BASE_REX 40
_PRIO_BASE_COM 10
_PRIO_FACTOR 80
_DELAY_COMMIT NO
_MAXTASK_STACK 512
MAX_SERVERTASK_STACK 500
MAX_SPECIALTASK_STACK 500
_DW_IO_AREA_SIZE 50
_DW_IO_AREA_FLUSH 50
FBM_VOLUME_COMPRESSION 50
FBM_VOLUME_BALANCE 10
_FBM_LOW_IO_RATE 10
CACHE_SIZE 262144
_DW_LRU_TAIL_FLUSH 25
XP_DATA_CACHE_RGNS 0
_DATA_CACHE_RGNS 64
XP_CONVERTER_REGIONS 0
CONVERTER_REGIONS 8
XP_MAXPAGER 0
MAXPAGER 64
SEQUENCE_CACHE 1
_IDXFILE_LIST_SIZE 2048
VOLUMENO_BIT_COUNT 8
OPTIM_MAX_MERGE 500
OPTIM_INV_ONLY YES
OPTIM_CACHE NO
OPTIM_JOIN_FETCH 0
JOIN_SEARCH_LEVEL 0
JOIN_MAXTAB_LEVEL4 16
JOIN_MAXTAB_LEVEL9 5
_READAHEAD_BLOBS 32
CLUSTER_WRITE_THRESHOLD 80
CLUSTERED_LOBS NO
RUNDIRECTORY /var/opt/sdb/data/wrk/SPDT
OPMSG1 /dev/console
OPMSG2 /dev/null
_KERNELDIAGFILE knldiag
KERNELDIAGSIZE 800
_EVENTFILE knldiag.evt
_EVENTSIZE 0
_MAXEVENTTASKS 2
_MAXEVENTS 100
_KERNELTRACEFILE knltrace
TRACE_PAGES_TI 2
TRACE_PAGES_GC 20
TRACE_PAGES_LW 5
TRACE_PAGES_PG 3
TRACE_PAGES_US 10
TRACE_PAGES_UT 5
TRACE_PAGES_SV 5
TRACE_PAGES_EV 2
TRACE_PAGES_BUP 0
KERNELTRACESIZE 5369
EXTERNAL_DUMP_REQUEST NO
_AK_DUMP_ALLOWED YES
_KERNELDUMPFILE knldump
_RTEDUMPFILE rtedump
_UTILITY_PROTFILE dbm.utl
UTILITY_PROTSIZE 100
_BACKUP_HISTFILE dbm.knl
_BACKUP_MED_DEF dbm.mdf
_MAX_MESSAGE_FILES 0
_SHMKERNEL 44601
__PARAM_CHANGED___ 0
__PARAM_VERIFIED__ 2008-05-03 23:12:55
DIAG_HISTORY_NUM 2
DIAG_HISTORY_PATH /var/opt/sdb/data/wrk/SPDT/DIAGHISTORY
_DIAG_SEM 1
SHOW_MAX_STACK_USE NO
SHOW_MAX_KB_STACK_USE NO
LOG_SEGMENT_SIZE 2133
_COMMENT
SUPPRESS_CORE YES
FORMATTING_MODE PARALLEL
FORMAT_DATAVOLUME YES
OFFICIAL_NODE
UKT_CPU_RELATIONSHIP NONE
HIRES_TIMER_TYPE CPU
LOAD_BALANCING_CHK 30
LOAD_BALANCING_DIF 10
LOAD_BALANCING_EQ 5
HS_STORAGE_DLL libhsscopy
HS_SYNC_INTERVAL 50
USE_OPEN_DIRECT YES
USE_OPEN_DIRECT_FOR_BACKUP NO
SYMBOL_DEMANGLING NO
EXPAND_COM_TRACE NO
JOIN_TABLEBUFFER 128
SET_VOLUME_LOCK YES
SHAREDSQL YES
SHAREDSQL_CLEANUPTHRESHOLD 25
SHAREDSQL_COMMANDCACHESIZE 262144
MEMORY_ALLOCATION_LIMIT 0
USE_SYSTEM_PAGE_CACHE YES
USE_COROUTINES YES
FORBID_LOAD_BALANCING YES
MIN_RETENTION_TIME 60
MAX_RETENTION_TIME 480
MAX_SINGLE_HASHTABLE_SIZE 512
MAX_HASHTABLE_MEMORY 5120
ENABLE_CHECK_INSTANCE YES
RTE_TEST_REGIONS 0
HASHED_RESULTSET YES
HASHED_RESULTSET_CACHESIZE 262144
CHECK_HASHED_RESULTSET 0
AUTO_RECREATE_BAD_INDEXES NO
AUTHENTICATION_ALLOW
AUTHENTICATION_DENY
TRACE_AK NO
TRACE_DEFAULT NO
TRACE_DELETE NO
TRACE_INDEX NO
TRACE_INSERT NO
TRACE_LOCK NO
TRACE_LONG NO
TRACE_OBJECT NO
TRACE_OBJECT_ADD NO
TRACE_OBJECT_ALTER NO
TRACE_OBJECT_FREE NO
TRACE_OBJECT_GET NO
TRACE_OPTIMIZE NO
TRACE_ORDER NO
TRACE_ORDER_STANDARD NO
TRACE_PAGES NO
TRACE_PRIMARY_TREE NO
TRACE_SELECT NO
TRACE_TIME NO
TRACE_UPDATE NO
TRACE_STOP_ERRORCODE 0
TRACE_ALLOCATOR 0
TRACE_CATALOG 0
TRACE_CLIENTKERNELCOM 0
TRACE_COMMON 0
TRACE_COMMUNICATION 0
TRACE_CONVERTER 0
TRACE_DATACHAIN 0
TRACE_DATACACHE 0
TRACE_DATAPAM 0
TRACE_DATATREE 0
TRACE_DATAINDEX 0
TRACE_DBPROC 0
TRACE_FBM 0
TRACE_FILEDIR 0
TRACE_FRAMECTRL 0
TRACE_IOMAN 0
TRACE_IPC 0
TRACE_JOIN 0
TRACE_KSQL 0
TRACE_LOGACTION 0
TRACE_LOGHISTORY 0
TRACE_LOGPAGE 0
TRACE_LOGTRANS 0
TRACE_LOGVOLUME 0
TRACE_MEMORY 0
TRACE_MESSAGES 0
TRACE_OBJECTCONTAINER 0
TRACE_OMS_CONTAINERDIR 0
TRACE_OMS_CONTEXT 0
TRACE_OMS_ERROR 0
TRACE_OMS_FLUSHCACHE 0
TRACE_OMS_INTERFACE 0
TRACE_OMS_KEY 0
TRACE_OMS_KEYRANGE 0
TRACE_OMS_LOCK 0
TRACE_OMS_MEMORY 0
TRACE_OMS_NEWOBJ 0
TRACE_OMS_SESSION 0
TRACE_OMS_STREAM 0
TRACE_OMS_VAROBJECT 0
TRACE_OMS_VERSION 0
TRACE_PAGER 0
TRACE_RUNTIME 0
TRACE_SHAREDSQL 0
TRACE_SQLMANAGER 0
TRACE_SRVTASKS 0
TRACE_SYNCHRONISATION 0
TRACE_SYSVIEW 0
TRACE_TABLE 0
TRACE_VOLUME 0
CHECK_BACKUP NO
CHECK_DATACACHE NO
CHECK_KB_REGIONS NO
CHECK_LOCK NO
CHECK_LOCK_SUPPLY NO
CHECK_REGIONS NO
CHECK_TASK_SPECIFIC_CATALOGCACHE NO
CHECK_TRANSLIST NO
CHECK_TREE NO
CHECK_TREE_LOCKS NO
CHECK_COMMON 0
CHECK_CONVERTER 0
CHECK_DATAPAGELOG 0
CHECK_DATAINDEX 0
CHECK_FBM 0
CHECK_IOMAN 0
CHECK_LOGHISTORY 0
CHECK_LOGPAGE 0
CHECK_LOGTRANS 0
CHECK_LOGVOLUME 0
CHECK_SRVTASKS 0
OPTIMIZE_AGGREGATION YES
OPTIMIZE_FETCH_REVERSE YES
OPTIMIZE_STAR_JOIN YES
OPTIMIZE_JOIN_ONEPHASE YES
OPTIMIZE_JOIN_OUTER YES
OPTIMIZE_MIN_MAX YES
OPTIMIZE_FIRST_ROWS YES
OPTIMIZE_OPERATOR_JOIN YES
OPTIMIZE_JOIN_HASHTABLE YES
OPTIMIZE_JOIN_HASH_MINIMAL_RATIO 1
OPTIMIZE_OPERATOR_JOIN_COSTFUNC YES
OPTIMIZE_JOIN_PARALLEL_MINSIZE 1000000
OPTIMIZE_JOIN_PARALLEL_SERVERS 0
OPTIMIZE_JOIN_OPERATOR_SORT YES
OPTIMIZE_QUAL_ON_INDEX YES
DDLTRIGGER YES
SUBTREE_LOCKS NO
MONITOR_READ 2147483647
MONITOR_TIME 2147483647
MONITOR_SELECTIVITY 0
MONITOR_ROWNO 0
CALLSTACKLEVEL 0
OMS_RUN_IN_UDE_SERVER NO
OPTIMIZE_QUERYREWRITE OPERATOR
TRACE_QUERYREWRITE 0
CHECK_QUERYREWRITE 0
PROTECT_DATACACHE_MEMORY NO
LOCAL_REDO_LOG_BUFFER_SIZE 0
FILEDIR_SPINLOCKPOOL_SIZE 10
TRANS_HISTORY_SIZE 0
TRANS_THRESHOLD_VALUE 60
ENABLE_SYSTEM_TRIGGERS YES
DBFILLINGABOVELIMIT 70L80M85M90H95H96H97H98H99H
DBFILLINGBELOWLIMIT 70L80L85L90L95L
LOGABOVELIMIT 50L75L90M95M96H97H98H99H
AUTOSAVE 1
BACKUPRESULT 1
CHECKDATA 1
EVENT 1
ADMIN 1
ONLINE 1
UPDSTATWANTED 1
OUTOFSESSIONS 3
ERROR 3
SYSTEMERROR 3
DATABASEFULL 1
LOGFULL 1
LOGSEGMENTFULL 1
STANDBY 1
USESELECTFETCH YES
USEVARIABLEINPUT NO
UPDATESTAT_PARALLEL_SERVERS 0
UPDATESTAT_SAMPLE_ALGO 1
SIMULATE_VECTORIO IF_OPEN_DIRECT_OR_RAW_DEVICE
COLUMNCOMPRESSION YES
TIME_MEASUREMENT NO
CHECK_TABLE_WIDTH NO
MAX_MESSAGE_LIST_LENGTH 100
SYMBOL_RESOLUTION YES
PREALLOCATE_IOWORKER NO
CACHE_IN_SHARED_MEMORY NO
INDEX_LEAF_CACHING 2
NO_SYNC_TO_DISK_WANTED NO
SPINLOCK_LOOP_COUNT 30000
SPINLOCK_BACKOFF_BASE 1
SPINLOCK_BACKOFF_FACTOR 2
SPINLOCK_BACKOFF_MAXIMUM 64
ROW_LOCKS_PER_TRANSACTION 50
USEUNICODECOLUMNCOMPRESSION NO
About sending you the data from the tables: I don't have permission to do that, since all the data is in a production system and the customer doesn't give me the rights to send any information. Sorry about that.
best regards
Clóvis -
Query Favourites copy from Dev to Prd System
Hello,
Is there any option, program or function module to copy query favourites?
We have two systems, BW and BI.
My customer keeps some 30 queries in his favourites in one BW system, and now we need to copy those same queries into the BI system. Any possibilities?
Regards,
Ranga
Edited by: Ranga123 on Aug 10, 2010 12:36 PM
Hi
You can use the function module FAVOS_EVENT_ADD_TO_USER_SHELF for this. SAP_GUID is the WBID, TEXT is the favourite text, and REPORT_NAME is RRMX.
To add new folders, you can either use the BEx Browser or log into your BW system using SAP Logon, go to your SAP Easy Access menu (transaction SMEN), right-click the folders you want to manage, and add your subfolders.
Also go through this on working with reusable items:
http://help.sap.com/saphelp_nw04s/helpdata/en/4e/0f813b420ce60ee10000000a114084/frameset.htm
Santosh -
ADF how to display a processing page when executing large queries
The ADF application that I have written currently has the following structure:
DataPage (search.jsp) containing a form where the user enters their search criteria --> forward action (doSearch) --> DataAction (validate) that validates the entered values --> forward action (success) --> DataAction (performSearch) that has a refresh method dragged onto it, plus an action that manually sets the iterator for the collection to -1 --> forward action (success) --> DataPage (results.jsp) that displays the results of the then (hopefully) populated collection.
I am not using a database; I am using a Java collection to hold the data, and the refresh method executes a query against an Autonomy server that retrieves results in XML format.
The problem I am experiencing is that sometimes a user may submit a very large query, and this creates problems because the browser times out while waiting for the results to be displayed; as a result a JBO-29000 null pointer error is shown.
I have previously got round this using Java servlets, whereby when a processing servlet is called it automatically redirects the browser to a processing page with an animation on it, so that the user knows something is being processed. The processing page then re-calls the servlet every 3 seconds to see if the processing has completed and, if it has, forwards to the appropriate results page.
Unfortunately I cannot stop users entering large queries, as the system requires users to be able to search in excess of 5 million documents on a regular basis.
I'd appreciate any help/suggestions you may have regarding this matter as soon as possible, so I can make the necessary amendments to the application prior to its pilot in a few weeks' time.
Hi Steve,
After a few attempts - yes, I have hit a few snags.
I'll send you a copy of the example application that I am working on, but this is what I have done so far.
I've taken a standard application that populates a simple java collection (not database driven) with the following structure:
DataPage --> DataAction (refresh Collection) -->DataPage
I have then added this code to the (refreshCollectionAction) DataAction
protected void invokeCustomMethod(DataActionContext ctx)
{
  super.invokeCustomMethod(ctx);
  HttpSession session = ctx.getHttpServletRequest().getSession();
  Thread nominalSearch = (Thread) session.getAttribute("nominalSearch");
  if (nominalSearch == null)
  {
    synchronized (this)
    {
      // create a new instance of the search thread
      nominalSearch = new ns(ctx);
    } // end of synchronized wrapper
    session.setAttribute("nominalSearch", nominalSearch);
    session.setAttribute("action", "nominalSearch");
    nominalSearch.start();
    System.err.println("started thread, calling loading page");
    ctx.setActionForward("loading.jsp");
  }
  else
  {
    if (nominalSearch.isAlive())
    {
      System.err.println("trying to call loading page");
      ctx.setActionForward("loading.jsp");
    }
    else
    {
      System.err.println("trying to call results page");
      ctx.setActionForward("success");
    }
  }
}
Created another class called ns.java:
package view;
import oracle.adf.controller.struts.actions.DataActionContext;
import oracle.adf.model.binding.DCIteratorBinding;
import oracle.adf.model.generic.DCRowSetIteratorImpl;

public class ns extends Thread
{
  private DataActionContext ctx;

  public ns(DataActionContext ctx)
  {
    this.ctx = ctx;
  }

  public void run()
  {
    System.err.println("START");
    DCIteratorBinding b = ctx.getBindingContainer().findIteratorBinding("currentNominalCollectionIterator");
    ((DCRowSetIteratorImpl) b.getRowSetIterator()).rebuildIteratorUpto(-1);
    // b.executeQuery();
    System.err.println("END");
  }
}
and added a loading.jsp page that calls a new DataAction called 'processing' every second. The processing DataAction has the following code in it:
package view;
import javax.servlet.http.HttpSession;
import oracle.adf.controller.struts.actions.DataForwardAction;
import oracle.adf.controller.struts.actions.DataActionContext;

public class ProcessingAction extends DataForwardAction
{
  protected void invokeCustomMethod(DataActionContext actionContext)
  {
    // overrides the oracle.adf.controller.struts.actions.DataAction method
    super.invokeCustomMethod(actionContext);
    HttpSession session = actionContext.getHttpServletRequest().getSession();
    String action = (String) session.getAttribute("action");
    if (action.equalsIgnoreCase("nominalSearch"))
    {
      actionContext.setActionForward("refreshCollection.do");
    }
  }
}
I'd appreciate any help or guidance that you may have on this as I really need to implement a generic loading page that can be called by a number of actions within my application as soon as possible.
Thanks in advance for your help
David.