Issue on query
Hi
I have deleted some queries in development and transported the deletion to Quality.
I can still see the deleted queries in Query Designer, even though they were deleted and the deletion was transported from Dev.
However, I cannot see the queries at the cube level: if I expand the cube, which should list all queries developed under it, they are not there. It's weird that I can still view them in Designer. I refreshed, but the queries are still visible.
Please help.
Regards
Edited by: Vasavi Yagnala on Dec 2, 2009 1:10 PM
When you're viewing the queries, are you matching on the ELTUID (the system-generated query element ID, which is the key for queries), or are you looking at the technical name of the queries? Unfortunately, multiple queries can be created with the same technical name, but each query will have a unique ELTUID.
So, is it possible that queries with the same technical name were created in your DEV environment and transported to QA, or created directly in the QA environment? You may have to delete them directly from QA too.
Similar Messages
-
Hello Gurus,
I am having an issue with query output. Please find the details below:
Unsold claims: Total ---> 200
Dispatch: Total ---> 118,086
Unsold claims / Dispatch ---> 0.001694
This should be the output, but it comes out as 0.0033120 because it's dividing by the max value in Dispatch, which is 60,386.
Reason: the aggregation property of Dispatch is 'max value'. I am not supposed to make any changes in modeling, so can we do anything at the reporting level to get the desired output?
Desired output ----> 0.001694
Output coming ----> 0.0033120
Please help me get the desired output at query level.
Regards,
Solved with the help of nested exception aggregation.
-
View Link Editor Issue - empty query clause if based on association
JDeveloper 10.1.2
Setup:
- entities A and B
- a 1 to * association from entity A to entity B (A_B_association)
- a view of A based on entity A (AView)
- a view of B based on entity B (BView)
- BView has an expert mode query
Create a view link from table A to table B:
- choose the A_B_association under AView for the source attribute
- choose the A_B_association under BView for the destination attribute
- click add
- click next
issue: the query clause is empty
- click next:
info dialog pops up;
title = "Business Components"
message = "Restoring the default where clause."
Note, however, that creating a view link based on the attributes works. That is, if I choose the primary key attribute under AView for the source attribute, and the corresponding foreign key attribute under BView for the destination attribute, then the wizard generates the expected query clause.
If I create a new project and recreate the entities, the views, and the association, then I can successfully create a view link based on the association. The foreignKey value on the association end was different in the new association XML file. I modified the old XML file so that it used the same foreignKey element, but that did not seem to work.
I am guessing that this is some sort of user error on my part, or that we have otherwise managed to mess up our XML files.
Any tips, hints, or ideas appreciated.
Thanks,
Steve
The problem is in AView.
We had modified the AView at one point, then reverted it to a default view of entity A. Here, "a default view" means that we undid our custom changes (apparently not thoroughly enough); that is, we did not delete it and create a default view from scratch using the contextual menu on the A entity.
The bottom line is that the AView attributes were not being mapped to the entity -- they were showing up as calculated attributes.
We created a "New Default View Object..." from the contextual menu of entity A, then manually corrected the AView.xml file to solve the problem. -
Performance issues with query input variable selection in ODS
Hi everyone
We've upgraded from BW 3.0B to NW04s BI using SP12.
There is a problem encountered with input variable selection. This happens regardless of using BEx (new or old 3.x) or using RSRT. When using the F4 search help (or "Select from list" in BEx context) to list possible values, this takes forever for large ODS (containing millions of records).
Using ST01 and SM50 to trace the code in the same query, we see a difference here:
<u>NW04s BI SQL command</u>
SELECT
"P0000"."COMP_CODE" AS "0000000032" ,"T0000"."TXTMD" AS "0000000032_TXTMD"
FROM
( "/BI0/PCOMP_CODE" "P0000" ) LEFT OUTER JOIN "/BI0/TCOMP_CODE" "T0000" ON "P0000"."COMP_CODE" = "T0000"."COMP_CODE"
WHERE
"P0000"."OBJVERS" = 'A' AND "P0000"."COMP_CODE" IN ( SELECT "O"."COMP_CODE" AS "KEY" FROM
"/BI0/APY_PP_C100" "O" )
ORDER BY
"P0000"."COMP_CODE" ASC
<u>BW 3.0B SQL command:</u>
SELECT ROWNUM < 500 ....
In 3.0B, rownum is limited to 500 and this results in a speedy, though limited query. In the new NW04s BI, this renders the selection screen unusable as ABAP dumps for timing out will occur first due to the large data volume searched using sequential read.
It will not be feasible to create indexes for every single query selection parameter (performance issues when loading, space required, etc.). Is there a reason why SAP seems to have fallen back to less effective code for this?
I have tried to change the number of selected rows to <500 in the BEx settings, but one must first reach a responsive screen in order to get to that setting, which is not always possible, and the setting is not saved for the next run.
Anyone with similar experience, or can anyone provide help on this?
There is a reason why the F4 help on ODS was faster in BW 3.x.
In BW 3.x the ODS did not support the read mode "Only values in InfoProvider". Comparing the different SQL statements, I propose changing the F4 mode in the InfoProvider-specific properties to "About master data". This is the fastest F4 mode.
As an alternative, you can define indexes on your ODS to speed up F4: you would need a non-unique index on InfoObject 0COMP_CODE in your ODS.
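As an illustration of what such a non-unique index buys for a distinct-values lookup, here is a small sketch. SQLite is used only to make it runnable; the real object would be the active table of the ODS, and all names here are invented:

```python
import sqlite3

# Illustration only: the real object is an ODS active table with a
# COMP_CODE column; table and index names here are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ods_active (comp_code TEXT, amount REAL)")
conn.executemany("INSERT INTO ods_active VALUES (?, ?)",
                 [("1000", 10.0), ("2000", 20.0), ("1000", 30.0)])

# The suggested non-unique index on the selection characteristic
conn.execute("CREATE INDEX idx_comp_code ON ods_active (comp_code)")

# The F4-style lookup: distinct values actually posted in the provider.
# With the index in place, the engine can scan the index instead of the
# whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT DISTINCT comp_code FROM ods_active"
).fetchall()
print(plan)
```

On the actual database the equivalent is simply a plain non-unique CREATE INDEX on the COMP_CODE column of the ODS active table.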
Check below for insights
https://forums.sdn.sap.com/click.jspa?searchID=6224682&messageID=2841493
Hope it Helps
Chetan
@CP.. -
Issue while query execution on web analyser.
Hi,
I am getting an error message while executing a query on the web, i.e. "Record set too large, data retrieval restricted by configuration". I am able to run the same query in BEx Analyzer without any issue. Any idea what could be the reason, and a solution for this issue?
Regards,
Neetika.
Hi Neetika,
The query is exceeding the set limits. I suggest you reduce the timeline of the query, as it may be producing a large number of cells in terms of rows and columns.
Execute the query for a smaller number of days; if you are executing it for 1 month, execute it for 10 days.
Rgds
SVU123 -
Performance issue on query. Help needed.
This is mainly a performance issue. I hope someone can help me on this.
Basically I have four tables Master (150000 records), Child1 (100000+ records), Child2 (50 million records !), Child 3 (10000+ records)
(please pardon the aliases).
Now every record in master has more than one corresponding record in each of the child tables (one to many).
Also there may not be any record in any or all of the tables for a particular master record.
Now, I need to fetch the max of last_updated_date for every master record in each of the 3 child tables, and then find the maximum of
the three last_updated_dates obtained from the 3 tables.
eg: for Master ID 100, I need to query Child1 for all the records of Master ID 100 and get the max last_updated_date.
Same for the other 2 tables and then get the maximum of these three values.
(I also need to take care of cases where no record may be found in a child table for a Master ID)
Writing a procedure that uses cursors that fetches the value from each of the child table hits performance
badly. And thing is I need to find out the last_updated_date for every Master record (all 150000 of them). It'll probably take days to do this.
SELECT MAX (C1.LAST_UPDATED_DATE)
,MAX (C2.LAST_UPDATED_DATE)
,MAX (C3.LAST_UPDATED_DATE)
FROM CHILD1 C1
,CHILD2 C2
,CHILD3 C3
WHERE C1.MASTER_ID = 100
OR C2.MASTER_ID = 100
OR C3.MASTER_ID = 100
I tried the above but I got a temp tablespace error. I don't think the query is good enough at all.
(The OR clause is to take care of no records in any child table. If there's an AND, then the join and hence select will
fail even if there is no record in one child table but valid values in the other 2 tables).
Thanks a lot.
Edited by: user773489 on Dec 16, 2008 11:49 AM
Not sure I understand the problem. The max you are getting from the above is already the greatest of the three; that's why we do the UNION ALL.
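The same UNION ALL idea can also be shown end-to-end as a runnable sketch (SQLite here purely so it executes; table and column names are made up, and a master with no rows in one child simply contributes nothing from that branch):

```python
import sqlite3

# Per-master MAX(last_updated_date) across three child tables:
# take the rows from each child independently, stack them with
# UNION ALL, then aggregate once -- no cross join between children.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE child1 (master_id INT, last_updated_date TEXT);
CREATE TABLE child2 (master_id INT, last_updated_date TEXT);
CREATE TABLE child3 (master_id INT, last_updated_date TEXT);
INSERT INTO child1 VALUES (100, '2008-12-01'), (100, '2008-12-15');
INSERT INTO child2 VALUES (100, '2008-12-14');
-- master 100 has no rows at all in child3: it still gets a result
""")
rows = conn.execute("""
    SELECT master_id, MAX(last_updated_date) AS last_dte
    FROM (SELECT master_id, last_updated_date FROM child1
          UNION ALL
          SELECT master_id, last_updated_date FROM child2
          UNION ALL
          SELECT master_id, last_updated_date FROM child3)
    GROUP BY master_id
""").fetchall()
print(rows)  # [(100, '2008-12-15')]
```

ISO date strings compare correctly as text here; on Oracle the dates stay DATEs, as in the sample below.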
Here's sample code without output, maybe this will clear it up:
with a as (
select 10 MASTER_ID, to_date('12/15/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 20 MASTER_ID, to_date('12/01/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 30 MASTER_ID, to_date('12/02/2008', 'MM/DD/YYYY') LAST_DTE from dual
),
b as (
select 10 MASTER_ID, to_date('12/14/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 20 MASTER_ID, to_date('12/02/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 40 MASTER_ID, to_date('11/15/2008', 'MM/DD/YYYY') LAST_DTE from dual
),
c as (
select 10 MASTER_ID, to_date('12/07/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 30 MASTER_ID, to_date('11/29/2008', 'MM/DD/YYYY') LAST_DTE from dual UNION ALL
select 40 MASTER_ID, to_date('12/13/2008', 'MM/DD/YYYY') LAST_DTE from dual
)
select MASTER_ID, MAX(LAST_DTE)
FROM
(select MASTER_ID, LAST_DTE from a UNION ALL
select MASTER_ID, LAST_DTE from b UNION ALL
select MASTER_ID, LAST_DTE from c)
group by MASTER_ID;
MASTER_ID MAX(LAST_DTE)
30 02-DEC-08
40 13-DEC-08
20 02-DEC-08
10 15-DEC-08
4 rows selected
Edited by: tk-7381344 on Dec 16, 2008 12:38 PM -
Issue with Query on a virtual infoprovider
Hello,
I am getting the following error message while executing a query on a virtual infoprovider. We have recently gone through upgrade from BI 3.5 to BI 7.0 EHP1 (SP5) and from SEM BCS 4.0 to BCS 6.0.
EVersion not specified or not unique UCR0 006
EError reading the data of InfoProvider ZBCS_CV11 DBMAN 305ZBCS_CV11
EError while reading data; navigation is possible BRAIN 289
I>> Row: 11 Inc: RAISE_READ_ERROR Prog: CL_RSDRV_VPROV_BASE RS_EXCEPTION 301CL_RSDRV_VPROV_BASE
This query ran fine before the upgrade. In the selection screen there are two fields, version1 and version2. If I specify the same value in both fields, the query runs fine; if I provide different values, the error message above appears.
I have tried different settings of the Read Mode property (H, A, X), and also different combinations of the virtual InfoProvider's properties (with and without hierarchies; with and without navigation attributes), but it did not work out.
The only thing that has changed on this virtual provider is that I had enabled delta caching, as it was supposed to be used in a MultiProvider.
Has anyone experienced a similar issue, or does anyone have an idea as to what is going wrong here? Please advise.
Regards,
Manish
Hi Manish,
I have exactly the same issue with a query on a virtual infoprovider after upgrading from BI 3.5 to BI 7.0 EHP1 (SP5) and from SEM BCS 4.0 to BCS 6.0.
Would you be so kind to tell me how you fixed this. (other queries seem to be working)
Kind regards,
Jamie Flaxman -
Issue with query for AR transactions posted to GL
Hi all,
I'm using Oracle R12.1.3.
I have a report similar to Account Analysis Report which displays Transactions posted to GL.
I have the following issue:
In the report results, if an AR transaction has 2 or more lines, I get a multiplication of rows. So my question is: how can I identify which AR transaction line is linked to which GL line number?
Here is my query:
SELECT DISTINCT GJH.JE_HEADER_ID,
GJL.JE_LINE_NUM,
PARTY.PARTY_NAME CUSTOMER_VENDOR,
RCT.TRX_NUMBER TRANS_NUMBER,
SUBSTR(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(SUBSTR(CTL.DESCRIPTION, 1, 2000), CHR(13), ''), CHR(10), ''), CHR(9), ''), CHR(11), ''), CHR(12), ''), 1, 100) DESCRIPTION,
NVL(xal.entered_dr, 0) - NVL(xal.entered_cr, 0) amount,
CTL.*
FROM GL.GL_JE_HEADERS GJH,
GL.GL_JE_LINES GJL,
GL.GL_CODE_COMBINATIONS GCC,
GL.GL_PERIODS GLP,
GL.GL_IMPORT_REFERENCES IMP,
XLA.XLA_AE_LINES XAL,
XLA.XLA_AE_HEADERS XAH,
XLA.XLA_EVENTS XE,
XLA.XLA_TRANSACTION_ENTITIES XTE,
RA_CUSTOMER_TRX_ALL RCT,
HZ_PARTIES PARTY,
AR.HZ_CUST_ACCOUNTS CA,
GL_CODE_COMBINATIONS_KFV CC,
AR.RA_CUSTOMER_TRX_LINES_ALL CTL,
AR.RA_CUST_TRX_LINE_GL_DIST_ALL CTLD
WHERE 1 = 1
AND GJH.JE_HEADER_ID = GJL.JE_HEADER_ID
-- AND GJL.STATUS || '' = 'P'
AND GCC.CODE_COMBINATION_ID = CTLD.CODE_COMBINATION_ID
AND GJH.PERIOD_NAME = GLP.PERIOD_NAME
AND RCT.CUSTOMER_TRX_ID = CTLD.CUSTOMER_TRX_ID
AND CTLD.CUSTOMER_TRX_LINE_ID = CTL.CUSTOMER_TRX_LINE_ID
AND ctld.customer_trx_id = RCT.CUSTOMER_TRX_ID
-- AND GLP.ADJUSTMENT_PERIOD_FLAG <> 'Y'
AND GJH.JE_SOURCE = 'Receivables'
AND GJL.JE_HEADER_ID = IMP.JE_HEADER_ID
AND GJL.JE_LINE_NUM = IMP.JE_LINE_NUM
AND IMP.GL_SL_LINK_ID = XAL.GL_SL_LINK_ID
AND IMP.GL_SL_LINK_TABLE = XAL.GL_SL_LINK_TABLE
AND XAL.APPLICATION_ID = XAH.APPLICATION_ID
AND XAL.AE_HEADER_ID = XAH.AE_HEADER_ID
AND XAH.APPLICATION_ID = XE.APPLICATION_ID
AND XAH.EVENT_ID = XE.EVENT_ID
AND XE.APPLICATION_ID = XTE.APPLICATION_ID
AND XTE.APPLICATION_ID = 222
AND XE.ENTITY_ID = XTE.ENTITY_ID
AND XTE.ENTITY_CODE = 'TRANSACTIONS'
AND XTE.SOURCE_ID_INT_1 = RCT.CUSTOMER_TRX_ID
AND RCT.BILL_TO_CUSTOMER_ID = CA.CUST_ACCOUNT_ID
AND CA.PARTY_ID = PARTY.PARTY_ID
AND rcT.CUSTOMER_TRX_ID = ctl.CUSTOMER_TRX_Id
AND CTL.LINE_TYPE = 'LINE'
AND XAL.CODE_COMBINATION_ID = CC.CODE_COMBINATION_ID
AND RCT.CUSTOMER_TRX_ID = 8857929
AND GJL.JE_LINE_NUM = 8866
Any ideas?
Thanks in advance,
Stoyanov.
Hi Stoyanov,
Please try using the table xla_distribution_links to join with ra_cust_trx_line_gl_dist_all by using
xla_distribution_links.source_distribution_id_num_1 = ra_cust_trx_line_gl_dist_all.cust_trx_line_gl_dist_id
The link below gives the join conditions for various subledger types:
Technical: R12 SLA Tables connection to AP, AR, INV, Payments, Receiving
Hope this helps.
Regards,
Manjusha. -
Issue in query panel LOV order by
Hi All,
I am using JDeveloper 11.1.1.6 and WebLogic 10.3.6. I am facing an issue, explained below: I have a screen with a query panel and a results table.
For example, say I have Name in the results table and as a search parameter. Name was created with the help of a view object, with the search parameter as a view criteria ordered by Name. The Name search parameter is created as a SelectOneChoice (LOV). Also, from my first page an Add button can be clicked; on clicking the button, a new Name value can be added. On successfully saving the new value, it is refreshed in the table results and displayed in the correct order, but in the LOV the latest value is added at the end, while the rest of the values are ordered by Name.
Is it possible to display the values in the correct order in the LOV after new values are added?
I have tried the following.
1. Refreshing the View Object with Search Criteria.
2. Refreshing the LOV View Object.
3. Refreshing the Whole Page.
4. Order by in View Accessor.
5. Refreshing the View Accessor.
Note: when I totally reload the page, it is reflected correctly.
Thanks.
Regards,
Deena.
Maybe you can try to reset the QueryModel, something like this:
QueryModel queryModel = queryComponent.getModel();
QueryDescriptor queryDescriptor = (QueryDescriptor) queryComponent.getValue();
queryModel.reset(queryDescriptor);
queryComponent.refresh(FacesContext.getCurrentInstance());
AdfFacesContext.getCurrentInstance().addPartialTarget(queryComponent);
Sorry, I don't have any more ideas :(
Dario -
Goods Issue - SQL Query will not sum
I have 3 Goods Issue documents, with Document Totals of 10,000 / 20,000 / 30,000 respectively.
I want to make a query that displays BOTH each of the 3 document values (10,000 / 20,000 / 30,000) AND the sum of the 3 documents (60,000), like below:
Doc 1 - 10,000
Doc 2 - 20,000
Doc 3 - 30,000
Total = 60,000
In addition, I would like the ability to choose date range. Basically, something like
SELECT Document_Total
FROM Goods_Issue table
WHERE the_document_date is between 1-SEP-2011 and 30-SEP-2011
AND the_reference_is _________________
I have tried many SQL queries, but it displayed either:
Doc 1 - 10,000
Doc 2 - 20,000
Doc 3 - 30,000
OR
Total = 60,000
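One way to get both shapes in a single result set is a UNION ALL with a literal 'Total' row; here is a sketch with invented table and column names (the actual SAP Business One Goods Issue tables differ), using SQLite purely so it runs:

```python
import sqlite3

# Detail rows plus a grand total via UNION ALL, with the same date
# range applied to both branches. Table/column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE goods_issue (doc_num TEXT, doc_date TEXT, doc_total REAL)")
conn.executemany("INSERT INTO goods_issue VALUES (?, ?, ?)", [
    ("Doc 1", "2011-09-05", 10000),
    ("Doc 2", "2011-09-12", 20000),
    ("Doc 3", "2011-09-20", 30000),
])
rows = conn.execute("""
    SELECT doc_num, doc_total FROM goods_issue
    WHERE doc_date BETWEEN '2011-09-01' AND '2011-09-30'
    UNION ALL
    SELECT 'Total', SUM(doc_total) FROM goods_issue
    WHERE doc_date BETWEEN '2011-09-01' AND '2011-09-30'
""").fetchall()
for doc, total in rows:
    print(doc, total)
```

If the reporting tool supports it, the total can alternatively be shown by the tool's own footer/summary feature instead of a union row.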
Please help.
@Hendry Wijaya @GordonDu @malhaar
Thanks for the help. The SQL query you provided solved 99% of the problem. I just need to tweak the SQL so that it displays the total in the footer. See the screenshot below; I uploaded the pictures to ImageShack as I couldn't find a way to attach pictures here.
[See here - Total_at_Footer|http://i129.photobucket.com/albums/p213/whitesnowbear/AAAA/Untitled-2.jpg]
Thanks a bunch. -
Issue in query my order in Order Management
Hi,
I customized the OM module to capture credit card information on a third-party site; for that I assigned dummy values for the credit card, cardholder name, etc. in OM before opening the third-party site. Now my issue is that if I query my order in OM, the dummy values are displayed instead of the values given in the third-party window.
I checked the following APIs: IBY_FNDCPT_SETUP_PUB, IBY_INSTRREG_PUB and IBY_CREDITCARD_PKG.
iStore
I got entries in IBY_FNDCPT_SETUP_PUB for my iStore new-card order. Now my issue is that if I query that order in OM, my credit card number, cardholder name and expiration date are not displayed; instead the dummy values I assigned are displayed, though the details are updated in the LOV for that order.
If I do authorization for an existing credit card in iStore and then query my order in OM, the credit card number, cardholder name and expiration date are displayed correctly. I need a solution to fix this problem, so please help me fix it.
Regards
Khan
Maybe you can try to reset the QueryModel, something like this:
QueryModel queryModel = queryComponent.getModel();
QueryDescriptor queryDescriptor = (QueryDescriptor) queryComponent.getValue();
queryModel.reset(queryDescriptor);
queryComponent.refresh(FacesContext.getCurrentInstance());
AdfFacesContext.getCurrentInstance().addPartialTarget(queryComponent);
Sorry, I don't have any more ideas :(
Dario -
Question/issue regarding querying for uncommited objects in Toplink...
Hi, I was hoping to get some insight into this problem we are encountering.
We have a scenario where we are creating a folder hierarchy (using TopLink):
1. a parent folder is created
2. child elements are created (in the same transaction as step 1),
3. we need to lookup the parent folder and assign it as the parent
of these child elements
4. end the transaction and commit all data
In our system we control access to objects by appending a filter to the selection criteria, so we end up with SQL like this example
(The t2 stuff is the authorization lookup part of the query.) ;
SELECT t0.ID, t0.CLASS_NAME, t0.DESCRIPTION, t0.EDITABLE,
t0.DATE_MODIFIED, t0.DATE_CREATED,
t0.MODIFIED_BY, t0.ACL_ID, t0.NAME, t0.CREATED_BY,
t0.TYPE_ID, t0.WKSP_ID, t1.ID, t1.LINK_SRC_PATH,
t1.ABSOLUTE_PATH, t1.MIME_TYPE, t1.FSIZE,
t1.CONTENT_PATH, t1.PARENT_ID
FROM XDOOBJECT t0, ALL_OBJECT_PRIVILEGES t2,
ARCHIVEOBJECT t1
WHERE ((((t1.ABSOLUTE_PATH = '/favorites/twatson2')
AND ((t1.ID = t2.xdoobject_id)
AND ((t2.user_id = 'twatson2')
AND (bitand(t2.privilege, 2) = 2))))
AND (t1.ID = t0.ID))
AND (t0.CLASS_NAME = 'oracle.xdo.server.repository.model.Archivable'))
When creating new objects we also create the authorization lookup record (which is inserted into a different table.) I can see all the objects are registered in the UOW identity map.
Basically, the issue is that this scenario all occurs in a single transaction and when querying for the newly created parent folder, if the authorization filter is appended to the query, the parent is not found. If I remove the authorization filter then the parent is found correctly. Or if I break this up into separate transactions and commit after each insert, then the parent is found correctly.
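The visibility problem is loosely analogous to uncommitted data in a database: a row that exists only inside an open transaction (or, in TopLink's case, only in the unit-of-work identity map) cannot be found by anything that evaluates the filter outside that context. A generic SQLite sketch of that analogy (only the table name is borrowed from the post; everything else is invented, and this is not TopLink's conforming mechanics):

```python
import sqlite3
import tempfile
import os

# Two connections to the same database file: the writer inserts an
# "authorization" row but does not commit; the reader cannot see it,
# while the writer's own connection can.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE all_object_privileges (xdoobject_id INT, user_id TEXT)")
writer.commit()

writer.execute("INSERT INTO all_object_privileges VALUES (1, 'twatson2')")
# no commit yet: the writer's transaction is still open

reader = sqlite3.connect(path)
seen_elsewhere = reader.execute(
    "SELECT COUNT(*) FROM all_object_privileges").fetchone()[0]
seen_by_writer = writer.execute(
    "SELECT COUNT(*) FROM all_object_privileges").fetchone()[0]
print(seen_elsewhere, seen_by_writer)  # 0 1
```

This is why committing after each insert, or dropping the filter, makes the parent findable in the thread above.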
I use the conformResultsInUnitOfWork attribute on the queries.
This is related to an earlier thread I have in this discussion forum;
Nested UnitOfWork and reading newly created objects...
Thanks for any help you can provide,
-Tim
Hi Doug, we add the authorization filter directly in the application code as the query is getting set up.
Here are some code examples; 1) the first is the code to create new object in the system, followed by 2) the code to create a new authorization lookup record (which also uses the first code to do the actual Toplink insert), then 3) an example of a read query where the authorization filter is appended to the Expression and after that 4) several helper methods.
I hope this is of some use as it's difficult to show the complete flow in a simple example.
1)
// create new object example
public Object DataAccess.createObject(....
Object result = null;
boolean inTx = true;
UnitOfWork uow = null;
try
SessionContext sc = mScm.getCurrentSessionContext();
uow = TLTransactionManager.getActiveTransaction(sc.getUserId());
if (uow == null)
Session session = TLSessionFactory.getSession();
uow = session.acquireUnitOfWork();
inTx = false;
Object oclone = (Object) uow.registerObject(object);
uow.assignSequenceNumbers();
if (oclone instanceof BaseObject)
BaseObject boclone = (BaseObject)oclone;
Date now = new Date();
boclone.setCreated(now);
boclone.setModified(now);
boclone.setModifiedBy(sc.getUserId());
boclone.setCreatedBy(sc.getUserId());
uow.printRegisteredObjects();
uow.validateObjectSpace();
if (inTx == false) uow.commit();
//just temp, see above
if (true == authorizer.requiresCheck(oclone))
authorizer.grantPrivilege(oclone);
result = oclone;
2)
// Authorizer.grantPrivilege method
public void grantPrivilege(Object object) throws DataAccessException
if (requiresCheck(object) == false)
throw new DataAccessException(
"Object does not implement Securable interface.");
Securable so = (Securable)object;
ModulePrivilege[] privs = so.getDefinedPrivileges();
BigInteger pmask = new BigInteger("0");
for (int i = 0; i < privs.length; i++) {
BigInteger pv = PrivilegeManagerFactory.getPrivilegeValue(privs[i]);
pmask = pmask.add(pv);
}
SessionContext sc = mScm.getCurrentSessionContext();
// the authorization lookup record
ObjectUserPrivilege oup = new ObjectUserPrivilege();
oup.setAclId(so.getAclId());
oup.setPrivileges(pmask);
oup.setUserId(sc.getUserId());
oup.setXdoObjectId(so.getId());
try
// this recurses back to the code snippet from above
mDataAccess.createObject(oup, this);
catch (DataAccessException dae) {
Object[] args = {dae.getClass().toString(), dae.getMessage()};
logger.severe(MessageFormat.format(EXCEPTION_MESSAGE, args));
throw new DataAccessException("Failed to grant object privilege.", dae);
}
3)
// example Query code
Object object = null;
ExpressionBuilder eb = new ExpressionBuilder();
Expression exp = eb.get(queryKeys[0]).equal(keyValues[0]);
for (int i = 1; i < queryKeys.length; i++)
exp = exp.and(eb.get(queryKeys[i]).equal(keyValues[i]));
// check if need to add authorization filter
if (authorizer.requiresCheck(domainClass) == true)
// this is where the authorization filter is appended to query
exp = exp.and(appendReadFilter());
ReadObjectQuery query = new ReadObjectQuery(domainClass, exp);
SessionContext sc = mScm.getCurrentSessionContext();
if (TLTransactionManager.isInTransaction(sc.getUserId()))
// part of a larger transaction scenario
query.conformResultsInUnitOfWork();
else
// not part of a transaction
query.refreshIdentityMapResult();
query.cascadePrivateParts();
Session session = getSession();
object = session.executeQuery(query);
4)
// builds the authorzation filter
private Expression appendReadFilter()
ExpressionBuilder eb = new ExpressionBuilder();
Expression exp1 = eb.getTable("ALL_OBJECT_PRIVILEGES").getField("xdoobject_id");
Expression exp2 = eb.getTable("ALL_OBJECT_PRIVILEGES").getField("user_id");
Expression exp3 = eb.getTable("ALL_OBJECT_PRIVILEGES").getField("privilege");
Vector args = new Vector();
args.add(READ_PRIVILEGE_VALUE);
Expression exp4 =
exp3.getFunctionWithArguments("bitand",args).equal(READ_PRIVILEGE_VALUE);
SessionContext sc = mScm.getCurrentSessionContext();
return eb.get("ID").equal(exp1).and(exp2.equal(sc.getUserId()).and(exp4));
// helper to get Toplink Session
private Session getSession() throws DataAccessException
SessionContext sc = mScm.getCurrentSessionContext();
Session session = TLTransactionManager.getActiveTransaction(sc.getUserId());
if (session == null)
session = TLSessionFactory.getSession();
return session;
// method of TLTransactionManager, provides easy access to TLSession
// which handles Toplink Sessions and is a singleton
public static UnitOfWork getActiveTransaction(String userId)
throws DataAccessException
TLSession tls = TLSession.getInstance();
return tls.getTransaction(userId);
// the TLSession method, returns the active transaction (UOW)
// or null if none
public UnitOfWork getTransaction(String uid) {
UnitOfWork uow = null;
UowWrapper uw = (UowWrapper)mTransactions.get(uid);
if (uw != null) {
uow = uw.getUow();
}
return uow;
}
Thanks!
-Tim -
Navigational attribute issue in query
Hi gurus,
Here is a very tricky issue that I am facing. If anyone could please help me with a workaround for the same.
I have an infoObject called ZDIV_CUS which is basically 0CUSTOMER with 0DIVISION compounded to it. Hence, when I load data into ZDIV_CUS, I would have one record for a unique combination of customer and division. In functional words, I would have one customer maintained for multiple divisions. ZDIV_CUS also has other navigational attributes as 0SALES_OFF, 0SALES_DIST & 0SALES_GRP. Also, I have 0MATERIAL, which has 0DIVISION as a navigational attribute in my cube.
Hence the master data for ZDIV_CUS looks as follows -
0DIVISION 0CUSTOMER 0SALES_OFF 0SALES_GRP 0SALES_DIST
01 100000 L001 L01 L010100
02 100000 L005 L08 L020150
01 100001 L001 L01 L050150
03 100001 L005 L02 L052542
The transactional data in my cube is as follows -
0MATERIAL 0CUSTOMER 0DIVISION 0SALES_OFF 0NET_SALES 0MATERIAL_DIVISION (Material master div)
102152 100000 01 L002 110000 02
102545 100001 02 L002 12000000 02
The above data is all transactional except for 0MATERIAL_DIVISION which comes from material division.
My query is built in such a way that the division comes from 0MATERIAL (restricted to variable in the selection screen). ZDIV_CUS and its navigational attributes are being used in my query. The initial display is shown as per ZDIV_CUS_0SALES_OFF (i.e. sales office from ZDIV_CUS)
The problem is that say for eg for the above mentioned scenario, the user executes the query for division 01 and the below mentioned output comes up -
0MATERIAL_DIVISION ZDIV_CUS 0MATERIAL ZDIV_CUS_0SALES_OFF 0NET_SALES
02 100000 102152 L001 110000
If we take a closer look, we would know that ZDIV_CUS_0SALES_OFF should pick up value L005 and not L001 since the division for which the query is run is 0MATERIAL_DIVISION and not 0DIVISION.
Request help to resolve the abovementioned issue.
Thank you,
Sree
Can anyone answer Issue-2? I am also facing the same.
Thanks in advance. -
Issue in query created on infoset - characterstic values are not displayed
Hi,
We have created a query based on a (customized) InfoSet. In this query, the values of only one object (sold-to party) are not displayed, whereas for the same object the values are displayed in the other query created on the respective ODS.
Note: only the query based on the InfoSet has this problem.
This is the description of the error -
System error in program CL_RSMD_RS and IF_RSMD_RS-READ_META_DATA-02, and it shows "No entries found".
Apart from that, the values of the respective attributes are also not displayed in the report.
I even verified the object in RSA1, where data is available for it.
Need help to solve this.
Regards,
Chandru..
Can anyone answer Issue-2? I am also facing the same.
Thanks in advance. -
Joins issue in query - OBIEE 11g
Hi all,
I have created a new repository with a simple star schema, and have also created a ragged hierarchy. Now when I select any dimension field and a measure from the fact, the query generated by BI has 2 physical queries, i.e. one for the dimension and the other for the fact; but when it joins these 2 subqueries, it does not use any common column to join on, which results in incorrect data.
Not able to figure out what is going wrong. Please let me know how to resolve this issue if anyone has faced a similar one.
rgds,
Shruti
RqList <<184070>> [for database 3023:147018:DEVFRC,57] /* FETCH FIRST 1000001 ROWS ONLY */
0 as c1 [for database 3023:147018,57],
D2.c1 as c2 [for database 3023:147018,57],
D1.c1 as c3 [for database 3023:147018,57]
Child Nodes (RqJoinSpec): <<184109>> [for database 3023:147018:DEVFRC,57]
RqJoinNode <<184107>> []
RqList <<184082>> [for database 3023:147018:DEVFRC,57] distinct
sum(FCT_LEDGER_STAT.N_VALUE) as c1 [for database 3023:147018,57]
Child Nodes (RqJoinSpec): <<184085>> [for database 3023:147018:DEVFRC,57]
RqJoinNode <<184084>> []
FCT_LEDGER_STAT T147426
) as D1
RqJoinNode <<184108>> []
RqList <<184088>> [for database 3023:147018:DEVFRC,57] distinct
DIM_FINANCIAL_ELEMENT.V_FINANCIAL_ELEM_NAME as c1 [for database 3023:147018,57]
Child Nodes (RqJoinSpec): <<184099>> [for database 3023:147018:DEVFRC,57]
RqJoinNode <<184098>> []
DIM_FINANCIAL_ELEMENT T147109
) as D2
OrderBy: c2 asc NULLS LAST [for database 3023:147018,57]
=========================================================
PHYSICAL
==================================================
WITH
SAWITH0 AS (select sum(T147426.N_VALUE) as c1
from
OFSAAATOMIC.FCT_LEDGER_STAT T147426),
SAWITH1 AS (select distinct T147109.V_FINANCIAL_ELEM_NAME as c1
from
OFSAAATOMIC.DIM_FINANCIAL_ELEMENT T147109)
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3 from ( select 0 as c1,
D2.c1 as c2,
D1.c1 as c3
from
SAWITH0 D1,
SAWITH1 D2
order by c2 ) D1 where rownum <= 1000001 -
Issues with Query Caching in MII
Hi All,
I am facing a strange problem with query caching in an MII query. I have created one Xacute query and set its cache duration to 30 sec. The BLS associated with the query retrieves data from the SAP system, and in the web page this value is populated by executing an iCommand. The following are the steps I followed:
Query executed for the first time: it retrieves data from SAP correctly. Let's say the value is val1.
At the 10th sec, the value in SAP changed from val1 to val2.
Query executed at the 15th sec: it gives the value as val1, which is expected, since it comes from the cache.
Query executed at the 35th sec: it gives the value as val2, retrieved from the SAP system, which is correct.
Query executed at the 40th sec: it gives the value as val1 from the cache, which is not expected.
I have tried clearing the Java cache in the browser and the JCo cache on the server.
I have seen the same problem for a tag query as well.
MII Version: 12.0.6 Build(12)
Any thoughts on this.
Thanks,
Soumen
Soumen Mondal,
If you are facing this problem from the same client PC and in the same session, it's very strange; but if there are two sessions running on the same PC, this kind of issue may come up.
Something about caching:
To decide whether to cache a query or not, consider the number of times you intend to run the query in a minute. For data changing in seconds but queried in minutes, I would recommend not caching the query at all.
A typical example of query caching is a query on the material master cached for 24 hours, where we know that after creating a material master we are not creating a PO on the same day.
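For reference, here is what a 30-second cache duration is supposed to do, as a generic sketch (this is not MII's internal implementation, just the expected semantics replayed against the timeline in the post):

```python
import time

# Generic TTL cache sketch: within the cache duration every caller
# gets the cached value; after it expires, the next call refreshes
# from the backend.
class TTLCache:
    def __init__(self, ttl_seconds, fetch, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.fetch = fetch          # function that hits the backend (e.g. SAP)
        self.clock = clock
        self.value = None
        self.expires_at = float("-inf")

    def get(self):
        now = self.clock()
        if now >= self.expires_at:  # stale, or never fetched
            self.value = self.fetch()
            self.expires_at = now + self.ttl
        return self.value

# Replay the timeline from the post with a fake clock: the backend
# value changes from val1 to val2 between the first and second fetch.
t = [0.0]
backend = iter(["val1", "val2"])
cache = TTLCache(30, lambda: next(backend), clock=lambda: t[0])
results = []
for t[0] in (0, 15, 35, 40):
    results.append(cache.get())
print(results)  # ['val1', 'val1', 'val2', 'val2'] -- the 40th sec should give val2
```

Getting val1 back at the 40th second, as described above, is inconsistent with these semantics, which points to two cache instances (e.g. two sessions) rather than one expiring cache.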
BR,
SB