Date Filter issues in MII Messaging
Hi all,
We have an issue on a newly commissioned MII server (12.1.5 build 99; Windows Server 2008 R2, SQL Server 2008) whereby the date filters (From Date; To Date) use the incorrect format in the MII Message Monitor query form. The filters default to M D Y format and we require D M Y format. The date is stored correctly in the database table.
Where is this configuration set?
In addition to this, I note that there are numerous warning entries in the MII log of the type '** MISSING RESOURCE: Unable to localize <object name>', where the object name refers to a number of date/time type objects (e.g. XCOL_MON, XSLT_TODAY, XLST_DECEMBER, etc.). This seems related to the first problem, if only in problem domain. How do we address this?
Thanks Mike,
I have been through almost the entire Netweaver Admin cockpit, including the UME, and again with an experienced NW admin but cannot locate where these defaults are set.
Can you point me to where this is configured, please?
Mark
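For readers unfamiliar with why the default matters: a value such as 03/04/2015 parses to two different dates depending on whether the form assumes M D Y or D M Y. A minimal Python sketch of that ambiguity (the date string is hypothetical, and this only illustrates the two formats, not MII's own parsing):

```python
from datetime import datetime

raw = "03/04/2015"  # hypothetical value typed into a date filter

# The same string is a valid date under both orderings -
# only the assumed format decides which day it means.
as_mdy = datetime.strptime(raw, "%m/%d/%Y")  # US-style M D Y
as_dmy = datetime.strptime(raw, "%d/%m/%Y")  # D M Y, as required here

print(as_mdy.date())  # 2015-03-04 (March 4th)
print(as_dmy.date())  # 2015-04-03 (April 3rd)
```

This is why the filter default has to come from a locale setting somewhere rather than from the stored data, which is unambiguous.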
Similar Messages
-
Hi,
We have created a document from an Essbase ASO cube.
A data filter has been created in the Essbase ASO cube. The data filter is 'all Europe Region'. When this data filter is applied to the user, and the user logs in to WA and accesses the document, the error below is produced.
========================================================
Loading this document produced errors.
Unknown members or dimensions have been removed from the document in an attempt to fix the problem.
Saving this document now will prevent these errors when the document is loaded again.
[1033] Native: 1001005 [Fri Mar 06 16:59:20 2009]uxsy9dbt.cos.agilent.com/vip_main/vip_main//Error(1001005) Unknown Member [Americas] in Report
==============================================
With no data filter assigned to the user, the report is fine...
Tested the same data filter in the Excel add-in and there are no issues/errors. Is there any setting that needs to be changed in Web Analysis?
Following up even though this is an old issue. Out of the blue my users began complaining their security in Web Analysis had gone awry. Their security in the Excel add-in was fine though.
Most users use a WA Shared Preferences file here. However one user, unbeknownst to me, was able to hard-code their user ID/password in the "database" connection for the Shared Preference. So any time another user logged in to Web Analysis, if you opened the View Pane to see all the details, the database user was John Doe - which was wrong - but the Web Analysis user was John Smith, which was correct. The John Doe user, like I said, somehow managed to "save" his user ID/password as the default, which caused all other users to "pass through" to Essbase as John Doe instead of as themselves, which means they were getting HIS filter, not their own.
I wasn't able to figure out how he did that but to fix it I deleted his profile then Edited the Shared Preferences File | Databases tab | highlight database in question and choose Edit | change "Default logon" back to "Use User's ID and Password" instead of "Enter User ID and Password...John Doe|Pwd".
It works fine now. Users get their same, correct security filters applied whether they go in to Web Analysis or Excel add-in. (In Web Analysis we have it set up to open a report at a certain location depending on their security filters, so this was causing us quite a headache until we figured out what the problem was).
I opened a case with Tech Support to try to shed light on how the user was able to hard-code his login credentials into the Shared Preferences file. Everyone only has Read access to that file, so how did he do it?
HTH,
Karen S. -
Hi all,
I want to filter my report with two dates: one comes from a dashboard prompt as a presentation variable, and for the other I subtract 7 days from the input date.
Here I'm using a filter in Advanced SQL like:
"Alerts View".STATUS_DT <= '@{SDate}'
"Alerts View".STATUS_DT >= timestampadd(sql_tsi_day ,-7 ,Date'@{SDate}')
I'm getting a date format error; below is the log:
Error Codes: YQCO4T56:OPR4ONWY:U9IM8TAC:OI2DL65P
Odbc driver returned an error (SQLExecDirectW).
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 46046] Datetime value 5/26/2009 does not match the specified format. (HY000)
SQL Issued: {call NQSGetLevelDrillability('SELECT "Alerts View".REVIEW_ID saw_0, "Alerts View".LAST_ACTVY_TYPE_CD saw_1, "Alerts View".STATUS_DT saw_2, "Alerts View".STATUS_DT saw_3, case when 1=0 then "Alerts View".STATUS_CD else ''Week'' end saw_4 FROM "Common Productivity Reports for Alerts" WHERE ("Alerts View".STATUS_DT <= timestampadd(sql_tsi_day ,0 ,Date''5/26/2009'')) AND ("Alerts View".STATUS_DT >= timestampadd(sql_tsi_day ,-7 ,Date''5/26/2009'')) AND (case when 1=0 then "Alerts View".STATUS_CD else ''Week'' end = ''Week'')')}
Please help me if anyone knows. My bad luck - it still did not work. But in that link, Goran has written the message below:
Let's change now dateSeparator and dateOrder entries in localedefiniitions.xml to see what will happen. I want to see date in my user friendly format dd.mm.yyyy, with or without leading zeros (01.01.2008 or 1.1.2008), so I changed dateOrder to dmy and dateSeparator to zero (.).
What is the meaning of "dateSeparator to zero (.)"?
What exactly should we put in the dateSeparator tag?
He has clearly written that for leading zeros we have to change dateSeparator. -
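The nQSError 46046 above is complaining that 5/26/2009 is not a valid body for a `DATE '...'` literal, which Oracle-style SQL expects in YYYY-MM-DD form. A minimal Python sketch of the reformatting the error is asking for (in OBIEE itself this would normally be handled by the presentation variable's date format or a cast, not by external code):

```python
from datetime import datetime

sdate = "5/26/2009"  # value the dashboard prompt delivered (M/D/YYYY)

# A DATE literal only accepts ISO YYYY-MM-DD, so the prompt value has
# to be reformatted before it is spliced into the SQL text.
iso = datetime.strptime(sdate, "%m/%d/%Y").strftime("%Y-%m-%d")

print(f"DATE '{iso}'")  # DATE '2009-05-26'
```

The same mismatch is what the localedefinitions.xml dateOrder/dateSeparator entries control on the display side.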
How to solve this Data Configuration issue with error message ORA-01017?
Bad public user name or password. ORA-01017: invalid username/password; logon denied
How to solve this?
I'm using JCreator with jdk1.5.0_7, but I don't know how to use the command prompt. When I execute my program, the command prompt showed:
Exception in thread "main"
java.lang.NoClassDefFoundError: StringManipulation
Press any key to continue...
So you managed to compile your code, since now you are running it. This error means that the class StringManipulation was not found. It is usually an indicator that your classpath is incorrect.
What did you do to generate this error? That is, how did you execute your program?
[edit] If you did not compile your code, then there is no class file StringManipulation.class and this error will appear. -
IDoc Data Lost between ECC and MII Message Listener.
We have recently experienced some network issues which caused the IDocs to stop flowing outbound from ECC to the MII Message Listener. This happened three times during a network switch reconfiguration. On two of the three occasions, the MII Message Listener had to be restarted on the MII server in order to get data flowing again. Interestingly, some of the IDocs generated during this outage were processed by the Message Listener and some were not.
We are running MII 12.0 sp11 and ECC 6.0. The ECC server and MII server are located in different geographic locations.
When we look at the ECC system in WE05, we see only successful statuses for the IDoc messages, saying the data passed to the port OK.
When we check the Message Listener on the MII end, it only shows "processed successfully". There are no failed messages or messages without rules.
Where can I check further to see the IDocs that really didn't make it to the Message Listener on the MII server for processing?
Also, is it possible that these IDocs that got lost somewhere are still sitting on the ECC or MII servers, where we can re-process them?
And the last question: why didn't the Message Listener handle the network issue automatically, instead of needing to be restarted?
Hi Robert,
Did SAP ever respond to/resolve the ticket you created for this issue? Someone told me that this was a known issue, but I have yet to have it verified by SAP. They asked me to simulate the issue - but our non-prod systems are on a VM server with other systems, so I can't exactly start pulling network cables out. I did drop it from the network a few times while sending IDocs down, but this failed to hang the processing of the IDocs.
We're hopefully going to MII 14/15 next year - but the business is keen to resolve this issue as it has impacted production a few times. The MII sites are in some remote regions and I don't think we can really improve the WAN links much further.
@Christian - thanks for the link. We don't really have an issue with IDocs not being received; it's just that the message processing hangs and we either need to restart the services or sometimes the system. -
Having issues with MMS messaging. The instructions for setup are to go to Settings, then General, then Network, then Cellular Data Network. My problem is there is no Cellular Data Network option on this page. I have tried resetting the phone; no good. Help.
That option does not exist depending on who the carrier is.
Who is the carrier? Are they a supported carrier?
Have you tried Settings > General > Reset > Reset Network Settings? -
Performance issue on Date filter
In my where condition I wanted a date filter on sale date. I want all sales after 9/1/2014.
CASE1
If I explicitly use date filter like
SaleDate > '2014-09-01 00:00:00:000' I am getting the result in 2 seconds.
CASE2
Since I may need to use this data value again, I created a date table with single column "date" and loaded the value '2014-09-01 00:00:00:000' .
So now my where is like
SaleDate > (Select date from dateTable)
When I run this, the result does not show up even after 10 minutes. Both date types are datetime. I am baffled. Why is this query not coming up with the result?
As mentioned by Erland, for the optimizer both situations are very different. With a literal, the optimizer can properly estimate the number of qualifying rows and adapt the query plan appropriately. With a scalar subquery, the value is unknown at compile time, and the optimizer will use heuristics to accommodate any value. In this case, the selection for all rows more recent than September 1st 2014 is probably a small percentage of the table.
I can't explain why the optimizer or engine goes awry, because the subquery's result is a scalar and shouldn't result in such a long runtime. If you are unlucky, the optimizer expanded the query and actually joins the two tables. That would make the indexes on table dateTable relevant, as well as the distribution and cardinality of dateTable's row values. If you want to know, you would have to inspect the (actual) query plan.
In general, I don't think your approach is a smart thing to do. I don't know why you want to have your selection date in a table (as opposed to a parameter of a stored procedure), but if you want to stick with it, maybe you should break the query up into something like this. The optimizer would still have to use heuristics (instead of more reliable estimates), but some unintended consequences could disappear.
Declare @min_date datetime
Set @min_date = (SELECT date FROM dateTable)
SELECT ...
FROM ...
WHERE SaleDate > @min_date
If you use a parameter (or an appropriate query hint), you will probably get performance close to your first case.
Gert-Jan -
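Gert-Jan's estimate-versus-heuristic point can be sketched with toy numbers. The histogram and the 30% fallback guess below are hypothetical illustrations of the idea, not SQL Server's actual statistics model:

```python
from datetime import date

# Toy "statistics": row counts per month for SaleDate (hypothetical numbers).
histogram = {
    date(2014, 7, 1): 400_000,
    date(2014, 8, 1): 400_000,
    date(2014, 9, 1): 50_000,   # only a sliver of the table is after Sep 1
}
total_rows = sum(histogram.values())

def estimate_with_literal(cutoff):
    """With a known literal, the optimizer can sum the qualifying buckets."""
    return sum(n for d, n in histogram.items() if d >= cutoff)

def estimate_with_unknown_value():
    """With a scalar subquery the value is unknown at compile time, so a
    fixed heuristic fraction stands in for the real selectivity (30% is
    a textbook figure used here purely for illustration)."""
    return int(total_rows * 0.30)

print(estimate_with_literal(date(2014, 9, 1)))  # 50000  -> seek looks cheap
print(estimate_with_unknown_value())            # 255000 -> scan looks safer
```

A fivefold difference in the row estimate is easily enough to flip the chosen plan, which is why copying the value into a variable (or using a query hint) behaves so differently from the inline subquery.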
Hi All
I Just upgraded my catalog and repository from 10g to 11g
Many of my Answers were using a date filter CURRENT_DATE-1.
In 11g this doesn't appear to work.
Formula syntax is invalid.
[nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 22023] An arithmetic operation is being carried out on a non-numeric type. (HY000)
SQL Issued: SELECT CURRENT_DATE-1 FROM "LDW"
OK (Ignore Error)
If I just use CURRENT_DATE, it works fine.
Any idea why? Is there another way to get this in 11g?
Thanks
Hi Experts,
I think this issue is not solved, because we are on OBIEE 11.1.1.6.6 and we are facing the same issue with CURRENT_DATE-1. Is this a bug or what?
In 10g, CURRENT_DATE-1 works fine and does not raise the non-numeric type error.
any help appreciated...
Regards
Raju. -
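A workaround commonly suggested for this error is to express the subtraction as an interval, e.g. TIMESTAMPADD(SQL_TSI_DAY, -1, CURRENT_DATE), rather than integer arithmetic on a date (worth verifying on your own version). The intent, "yesterday", is just this computation:

```python
from datetime import date, timedelta

# CURRENT_DATE - 1 expressed as explicit interval arithmetic: adding a
# -1 day interval to a date, rather than subtracting an integer.
def yesterday(today: date) -> date:
    return today + timedelta(days=-1)

print(yesterday(date(2012, 8, 31)))  # 2012-08-30
```

The 11g error suggests the engine now refuses the implicit date-to-number coercion that 10g tolerated, which is why the interval form is the safer spelling.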
Hi,
I created a columnar report with a date filter, but when I try to use the filter this error occurs:
ADC Server exception in OpenViewset().
ADC Server exception in OpenViewset(). [ErrorSource="ActiveDataCache", ErrorID="ADCServerException"] ERROR [42000] [Microsoft][ODBC driver for Oracle][Oracle]ORA-00911: invalid character [ErrorSource="msorcl32.dll"]
The filter is "FIELD_DT is greater than 10/1/2009 12:00:00 AM".
The Data Object is linked to an External Data Source, and the field used is of datetime type.
I use BAM 10.1.3.3.0.
Please help me.
Hi,
It is difficult to say from the error message whether the problem is due to the filter or some other condition in the report. If you suspect it is the filter, can you please create a simple Updating Ordered List on the data object you are using to see if it works? When you find it working, add the exact filter to it, and then we would know if the filter has issues.
Thanks,
Vishal -
BI Composer - Date filter is not working
Hi,
Bi Composer -
=========
1)
Date filter selection is not working in BI Composer. Has anybody faced such an issue?
error message: "java.lang.nullpointerexception"
2) BI Composer not working with https link (SSL)
error message "Invalid BIPS Connection"
I am using obiee11.1.1.6.0, IE8 ,windows
Thanks
Deva
Edited by: Devarasu on Aug 31, 2012 1:23 AM
Hi,
1) Raised an SR with the MOS team; a bug was created, and it's fixed in the 11.1.1.7 version.
Bug 14635134 - BI COMPOSER - CAN'T BUILD FILTERS ON DATE COLUMNS
From bug 14205672 it looks like this may already be resolved in 11.1.1.7.
2) For the error "Invalid BIPS Connection", we need to configure Composer via FWM - EM, then save it and commit the changes. After testing, it's working fine.
SR reference
SR 3-6209577211 : BI Composer - can't build filters on date columns
Thanks
Deva
Edited by: Devarasu on Oct 22, 2012 11:22 AM
Edited by: Devarasu on Oct 22, 2012 11:27 AM -
Can someone explain why 'Y' = 'N' is not working with a PARALLEL plan? That is, with a filter like 'Y' = 'N' specified, if PQ is used the query does not return instantly; in fact, it reads the entire tables.
Here is the test case. The goal is to execute only one of the SQLs joined by UNION ALL. I have included 'Y' = 'N' in both SQLs for test purposes.
DB Version is 10.2.0.4
Create table test_tbl_01 nologging as select do.* from dba_objects do , dba_objects d1 where rownum < 22000001;
Create table test_tbl_02 nologging as select do.* from dba_objects do , dba_objects d1 where rownum < 22000001;
execute DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'TEST_TBL_01');
execute DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'TEST_TBL_02');
*Serial path with 2 table join*
SQL> select
2 /* parallel(t1,2 ) parallel(t2,2) */
3 t1.*
4 from test_tbl_01 t1 ,test_tbl_02 t2
5 where t1.object_name = t2.object_name
6 and 'Y' = 'N'
7 and t1.object_type = 'TABLE'
8 union all
9 select
10 /* parallel(t1,2 ) parallel(t2,2) */
11 t1.*
12 from test_tbl_01 t1 ,test_tbl_02 t2
13 where t1.object_name = t2.object_name
14 and 'Y' = 'N'
15 /
no rows selected
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 3500703583
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 168 | | 0 (0)| |
| 1 | UNION-ALL | | | | | | |
|* 2 | FILTER | | | | | | |
|* 3 | HASH JOIN | | 660G| 50T| 449M| 6242K (99)| 24:16:38 |
|* 4 | TABLE ACCESS FULL| TEST_TBL_01 | 5477K| 386M| | 41261 (2)| 00:09:38 |
| 5 | TABLE ACCESS FULL| TEST_TBL_02 | 22M| 212M| | 40933 (2)| 00:09:34 |
|* 6 | FILTER | | | | | | |
|* 7 | HASH JOIN | | 2640G| 201T| 467M| 24M(100)| 95:54:53 |
| 8 | TABLE ACCESS FULL| TEST_TBL_02 | 22M| 212M| | 40933 (2)| 00:09:34 |
| 9 | TABLE ACCESS FULL| TEST_TBL_01 | 21M| 1546M| | 41373 (3)| 00:09:40 |
Predicate Information (identified by operation id):
2 - filter(NULL IS NOT NULL)
3 - access("T1"."OBJECT_NAME"="T2"."OBJECT_NAME")
4 - filter("T1"."OBJECT_TYPE"='TABLE')
6 - filter(NULL IS NOT NULL)
7 - access("T1"."OBJECT_NAME"="T2"."OBJECT_NAME")
Statistics
1 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
567 bytes sent via SQL*Net to client
232 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
*Parallel path with 2 table join*
SQL> select
2 /*+ parallel(t1,2 ) parallel(t2,2) */
3 t1.*
4 from test_tbl_01 t1 ,test_tbl_02 t2
5 where t1.object_name = t2.object_name
6 and 'Y' = 'N'
7 and t1.object_type = 'TABLE'
8 union all
9 select
10 /*+ parallel(t1,2 ) parallel(t2,2) */
11 t1.*
12 from test_tbl_01 t1 ,test_tbl_02 t2
13 where t1.object_name = t2.object_name
14 and 'Y' = 'N'
15 /
no rows selected
Elapsed: 00:01:09.34
Execution Plan
Plan hash value: 1557722279
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 2 | 168 | | 0 (0)| | | | |
| 1 | PX COORDINATOR | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10004 | | | | | | Q1,04 | P->S | QC (RAND) |
| 3 | BUFFER SORT | | 2 | 168 | | | | Q1,04 | PCWP | |
| 4 | UNION-ALL | | | | | | | Q1,04 | PCWP | |
|* 5 | FILTER | | | | | | | Q1,04 | PCWC | |
|* 6 | HASH JOIN | | 660G| 50T| 224M| 3465K (99)| 13:28:42 | Q1,04 | PCWP | |
| 7 | PX JOIN FILTER CREATE| :BF0000 | 5477K| 386M| | 22861 (2)| 00:05:21 | Q1,04 | PCWP | |
| 8 | PX RECEIVE | | 5477K| 386M| | 22861 (2)| 00:05:21 | Q1,04 | PCWP | |
| 9 | PX SEND HASH | :TQ10000 | 5477K| 386M| | 22861 (2)| 00:05:21 | Q1,00 | P->P | HASH |
| 10 | PX BLOCK ITERATOR | | 5477K| 386M| | 22861 (2)| 00:05:21 | Q1,00 | PCWC | |
|* 11 | TABLE ACCESS FULL| TEST_TBL_01 | 5477K| 386M| | 22861 (2)| 00:05:21 | Q1,00 | PCWP | |
| 12 | PX RECEIVE | | 22M| 212M| | 22679 (1)| 00:05:18 | Q1,04 | PCWP | |
| 13 | PX SEND HASH | :TQ10001 | 22M| 212M| | 22679 (1)| 00:05:18 | Q1,01 | P->P | HASH |
| 14 | PX JOIN FILTER USE | :BF0000 | 22M| 212M| | 22679 (1)| 00:05:18 | Q1,01 | PCWP | |
| 15 | PX BLOCK ITERATOR | | 22M| 212M| | 22679 (1)| 00:05:18 | Q1,01 | PCWC | |
| 16 | TABLE ACCESS FULL| TEST_TBL_02 | 22M| 212M| | 22679 (1)| 00:05:18 | Q1,01 | PCWP | |
|* 17 | FILTER | | | | | | | Q1,04 | PCWC | |
|* 18 | HASH JOIN | | 2640G| 201T| 233M| 13M(100)| 53:15:52 | Q1,04 | PCWP | |
| 19 | PX RECEIVE | | 22M| 212M| | 22679 (1)| 00:05:18 | Q1,04 | PCWP | |
| 20 | PX SEND HASH | :TQ10002 | 22M| 212M| | 22679 (1)| 00:05:18 | Q1,02 | P->P | HASH |
| 21 | PX BLOCK ITERATOR | | 22M| 212M| | 22679 (1)| 00:05:18 | Q1,02 | PCWC | |
| 22 | TABLE ACCESS FULL | TEST_TBL_02 | 22M| 212M| | 22679 (1)| 00:05:18 | Q1,02 | PCWP | |
| 23 | PX RECEIVE | | 21M| 1546M| | 22924 (2)| 00:05:21 | Q1,04 | PCWP | |
| 24 | PX SEND HASH | :TQ10003 | 21M| 1546M| | 22924 (2)| 00:05:21 | Q1,03 | P->P | HASH |
| 25 | PX BLOCK ITERATOR | | 21M| 1546M| | 22924 (2)| 00:05:21 | Q1,03 | PCWC | |
| 26 | TABLE ACCESS FULL | TEST_TBL_01 | 21M| 1546M| | 22924 (2)| 00:05:21 | Q1,03 | PCWP | |
Predicate Information (identified by operation id):
5 - filter(NULL IS NOT NULL)
6 - access("T1"."OBJECT_NAME"="T2"."OBJECT_NAME")
11 - filter("T1"."OBJECT_TYPE"='TABLE')
17 - filter(NULL IS NOT NULL)
18 - access("T1"."OBJECT_NAME"="T2"."OBJECT_NAME")
Statistics
1617 recursive calls
3 db block gets
488929 consistent gets
493407 physical reads
636 redo size
567 bytes sent via SQL*Net to client
232 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
0 rows processed
However, a single table with UNION ALL and PQ works:
*NO Joins (i.e. Single Table with PQ ) , Issue does not show-up.*
_*SERIAL PLAN with one Table*_
SQL> select
2 /* parallel(t1,2 ) */
3 t1.*
4 from test_tbl_01 t1
5 where 'Y' = 'N'
6 and t1.object_type = 'TABLE'
7 union all
8 select
9 /* parallel(t1,2 ) */
10 t1.*
11 from test_tbl_01 t1
12 where 'Y' = 'N'
13 /
no rows selected
Elapsed: 00:00:00.01
Execution Plan
Plan hash value: 2870519681
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 148 | 0 (0)| |
| 1 | UNION-ALL | | | | | |
|* 2 | FILTER | | | | | |
|* 3 | TABLE ACCESS FULL| TEST_TBL_01 | 5477K| 386M| 41261 (2)| 00:09:38 |
|* 4 | FILTER | | | | | |
| 5 | TABLE ACCESS FULL| TEST_TBL_01 | 21M| 1546M| 41373 (3)| 00:09:40 |
Predicate Information (identified by operation id):
2 - filter(NULL IS NOT NULL)
3 - filter("T1"."OBJECT_TYPE"='TABLE')
4 - filter(NULL IS NOT NULL)
Statistics
0 recursive calls
0 db block gets
0 consistent gets
0 physical reads
0 redo size
567 bytes sent via SQL*Net to client
232 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
_*PARALLEL PLAN with one Table*_
SQL> select
2 /*+ parallel(t1,2 ) */
3 t1.*
4 from test_tbl_01 t1
5 where 'Y' = 'N'
6 and t1.object_type = 'TABLE'
7 union all
8 select
9 /*+ parallel(t1,2 ) */
10 t1.*
11 from test_tbl_01 t1
12 where 'Y' = 'N'
13 /
no rows selected
Elapsed: 00:00:00.09
Execution Plan
Plan hash value: 3114025180
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | 2 | 148 | 0 (0)| | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | | | | | Q1,00 | P->S | QC (RAND) |
| 3 | UNION-ALL | | | | | | Q1,00 | PCWP | |
|* 4 | FILTER | | | | | | Q1,00 | PCWC | |
| 5 | PX BLOCK ITERATOR | | 5477K| 386M| 22861 (2)| 00:05:21 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| TEST_TBL_01 | 5477K| 386M| 22861 (2)| 00:05:21 | Q1,00 | PCWP | |
|* 7 | FILTER | | | | | | Q1,00 | PCWC | |
| 8 | PX BLOCK ITERATOR | | 21M| 1546M| 22924 (2)| 00:05:21 | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL| TEST_TBL_01 | 21M| 1546M| 22924 (2)| 00:05:21 | Q1,00 | PCWP | |
Predicate Information (identified by operation id):
4 - filter(NULL IS NOT NULL)
6 - filter("T1"."OBJECT_TYPE"='TABLE')
7 - filter(NULL IS NOT NULL)
Statistics
28 recursive calls
3 db block gets
7 consistent gets
0 physical reads
628 redo size
567 bytes sent via SQL*Net to client
232 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
0 rows processed
The same behaviour appears in 11.1.0.6, and you don't need such a large data set to prove the point. The parallel distribution may change to a broadcast with a smaller data set, but I demonstrated the effect when my two tables simply selected 30,000 rows each from all_objects.
I think you should pass this to Oracle Corp. as a bug - using the smaller data set.
The problem seems to be the way that Oracle combines multiple lines of a plan into groups of operations (as in PCWC, PCWC, PCWP). It looks like this particular example has managed to fold the FILTER into a group in such a way that Oracle has lost track of the fact that it is a 'pre-empting - i.e. always false' filter rather than an ordinary data filter; consequently, the filter doesn't apply until after the hash join starts running.
In my example (which did a broadcast distribution) I could see that Oracle read the entire first table, then started to read the second table, but stopped after one row of the second table, because my plan allowed the join and filter to be applied immediately after the first row from the second table. And I think Oracle decided that the filter was always going to be false at that moment - so it stopped running the second tablescan. You've used a hash/hash distribution, which has resulted in both scans completing because the slaves in each layer don't talk to each other.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
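Jonathan's point about where the FILTER sits can be modelled in a few lines. This toy sketch (not Oracle internals) contrasts a plan that checks the constant-false predicate before touching the row sources with one that only applies it after every row has been produced:

```python
rows_scanned = 0

def scan(table):
    """Row source; counts every row it produces."""
    global rows_scanned
    for row in table:
        rows_scanned += 1
        yield row

ALWAYS_FALSE = ('Y' == 'N')  # compiles to a constant-false predicate

def serial_plan(t1, t2):
    # FILTER evaluated above the join: constant-false short-circuits
    # before either table is touched (the 'NULL IS NOT NULL' step).
    if not ALWAYS_FALSE:
        return []
    return [(a, b) for a in scan(t1) for b in scan(t2) if a == b]

def parallel_plan(t1, t2):
    # Filter folded in with the row sources: every row is produced
    # (both tables fully scanned) and only then discarded.
    return [(a, b) for a in scan(t1) for b in scan(t2)
            if a == b and ALWAYS_FALSE]

t1, t2 = list(range(100)), list(range(100))

serial_plan(t1, t2)
before = rows_scanned           # 0: nothing was scanned
parallel_plan(t1, t2)
print(before, rows_scanned)     # 0 10100
```

Both plans return no rows; the entire difference, as in the traces above, is in how much work is done before the always-false filter takes effect.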
hi Guru's,
I am using OBIEE 11.1.6.8. One of my reports has a performance issue. When I dug into it, I found that the date filter is not applied in the SQL generated and sent to the DB, so it is doing a table scan. But the strange thing is that it still displays data based on the date range filter. It only happens with the date dimension; all other dimensions are working fine. I am not sure what is missing.
Thanks In advance.
regards
Mohammed.
Hi Saichand,
Thanks for taking time and looking into.
The filter is applied in the logical query, but the physical query sent to the DB does not have the filter. Because of that, it does a full table scan of the fact table and takes almost 30 minutes to display data. I am not sure why the physical query does not have the date filter; when I add a location or another type of filter, it is added to the physical query sent to the DB.
regards
@li -
How to create a data filter for BAPI used in a Distribution Model ?
I am trying to create IDocs for a QM inspection plan via message type QPMK in ECC 6.0. When creating the new message type in the Distribution Model, it forced me to use a BAPI (MasterInspectionChar) rather than just the message type as I have done in the past. In order to ensure that only IDocs related to a specific plant and material type get sent out, I was trying to create a filter.
I am able to get into a 'Change Filter' dialog by double-clicking 'No data filter set' under the node 'MasterInspectionChar.SaveReplica'. However, highlighting anything on this dialog and clicking the 'Create filter group' button results in the message 'Select node: Content-dependent parameter filtering'.
Online help led me in a different direction; one thread led me to believe I needed to create a table entry for the filter in table TBD16 via BD81, for later attachment to the BAPI in the Distribution Model. Before this, I created two Z* ALE Object Type records for plant and material type in table TBD11 via BD95, which I thought would provide a lookup back in BD81 for creation of a new filter record, but these objects are not found. Per the help, I am also unclear as to whether I would need to set up receiver determination or parameter filtering, or where in the configuration this step should happen.
Can anyone point me back in the correct direction to get a data filter set up for this BAPI within the Distribution Model ?
Thanks - Mike
Just to clarify, what I am having trouble with is finding where exactly to plug in the name of the plant to be filtered. If using a message type such as MATMAS in a distribution model, it is very straightforward to navigate down from the 'Data filter active' node and assign a value to whatever field you wish to filter via the 'List of Values'. But in this example, message type QPMK required use of the BAPI MasterInspectionChar, and therefore the different method of creating the filter.
I cannot find similar functionality when navigating the BAPI-related row for MasterInspectionChar. The 'Change Filter' dialog popup displays a row for 'Container' and then 3 more rows containing 'BAPI structure...'. Clicking the 'Attribute' checkbox on any of these rows sets 'Data filter active' as a node under the BAPI. Double-clicking 'Data filter active' brings me back into the same dialog, and selecting any row and attempting at this point to create a filter results in the 'Select node: content-dependent parameter filtering' warning. Is this warning occurring because of missing configuration?
Thanks - Mike -
Hi,
I'm very new to Streams and have a doubt regarding an ORA-01403 issue happening during replication. I need your kind help in this regard. Thanks in advance.
Oracle version : 10.0.3.0
1. Suppose there are 10 LCRs in a transaction and one of the LCRs causes ORA-01403, so none of the LCRs gets executed.
We can read the data of this LCR and manually update the record in the destination database.
Even though this is done, while re-executing the transaction I'm getting the same ORA-01403 on the same LCR.
What could be the possible reason?
Since this is a large-scale system with thousands of transactions, it is not possible to handle the 'no data found' issues occurring in the system one by one.
I have written a PL/SQL block which can generate UPDATE statements from the old data available in the LCR, so that I can re-execute the transaction again.
The PL/SQL block is given below. Could you please check whether there are any issues in how it generates the UPDATE statements? Thank you.
/* Formatted on 2008/10/23 14:46 (Formatter Plus v4.8.7) */
--Script for generating the Update scripts for the Message which caused the 'NO DATA FOUND' error.
DECLARE
   ret         NUMBER;                 -- return value from GETOBJECT
   i           NUMBER;                 -- index into the LCR table
   pk_count    NUMBER;                 -- number of PK columns for the table
   lcr         ANYDATA;                -- the logical change record
   typ         VARCHAR2 (61);          -- data type of a column
   rowlcr      SYS.LCR$_ROW_RECORD;    -- the LCR that caused the error
   newlist     SYS.LCR$_ROW_LIST;      -- 'old' column values from the LCR
   upd_qry     VARCHAR2 (5000);
   equals      VARCHAR2 (5) := ' = ';
   data1       VARCHAR2 (2000);
   num1        NUMBER;
   date1       TIMESTAMP (0);
   timestamp1  TIMESTAMP (3);
   iscomma     BOOLEAN;

   TYPE tab_lcr IS TABLE OF ANYDATA INDEX BY BINARY_INTEGER;
   TYPE pk_cols IS TABLE OF VARCHAR2 (50) INDEX BY BINARY_INTEGER;

   lcr_table   tab_lcr;
   pk_table    pk_cols;
BEGIN
   -- Collect the failed LCRs of the transaction in question
   i := 1;
   FOR txn_id IN (SELECT message_number, local_transaction_id
                    FROM dba_apply_error
                   WHERE local_transaction_id = '2.85.42516'
                ORDER BY error_creation_time)
   LOOP
      lcr := DBMS_APPLY_ADM.GET_ERROR_MESSAGE (txn_id.message_number,
                                               txn_id.local_transaction_id);
      lcr_table (i) := lcr;
      i := i + 1;
   END LOOP;

   DBMS_OUTPUT.PUT_LINE ('size >' || lcr_table.COUNT);

   FOR k IN 1 .. lcr_table.COUNT
   LOOP
      rowlcr := NULL;
      ret := lcr_table (k).GETOBJECT (rowlcr);

      -- Find the number of PK columns of the table
      SELECT COUNT (1)
        INTO pk_count
        FROM all_cons_columns col, all_constraints con
       WHERE col.table_name = con.table_name
         AND col.constraint_name = con.constraint_name
         AND con.constraint_type = 'P'
         AND con.table_name = rowlcr.GET_OBJECT_NAME;
      DBMS_OUTPUT.PUT_LINE ('Count of PK Columns >' || pk_count);

      upd_qry := 'UPDATE ' || rowlcr.GET_OBJECT_NAME || ' SET ';

      -- Build the SET list from the 'old' values carried in the LCR
      newlist := rowlcr.GET_VALUES ('old');
      iscomma := FALSE;
      FOR j IN 1 .. newlist.COUNT
      LOOP
         IF newlist (j) IS NOT NULL
         THEN
            IF j < newlist.COUNT AND iscomma
            THEN
               upd_qry := upd_qry || ',';
            END IF;
            iscomma := FALSE;
            typ := newlist (j).data.GETTYPENAME;

            IF typ = 'SYS.VARCHAR2'
            THEN
               ret := newlist (j).data.GETVARCHAR2 (data1);
               IF data1 IS NOT NULL
               THEN
                  upd_qry := upd_qry || newlist (j).column_name || equals
                          || ' ''' || SUBSTR (data1, 0, 253) || '''';
                  iscomma := TRUE;
               END IF;
            ELSIF typ = 'SYS.NUMBER'
            THEN
               ret := newlist (j).data.GETNUMBER (num1);
               IF num1 IS NOT NULL
               THEN
                  upd_qry := upd_qry || newlist (j).column_name || equals
                          || ' ' || num1;
                  iscomma := TRUE;
               END IF;
            ELSIF typ = 'SYS.DATE'
            THEN
               ret := newlist (j).data.GETDATE (date1);
               IF date1 IS NOT NULL
               THEN
                  upd_qry := upd_qry || newlist (j).column_name || equals
                          || ' TO_DATE( ''' || date1
                          || ''', ''DD/MON/YYYY HH:MI:SS AM'')';
                  iscomma := TRUE;
               END IF;
            ELSIF typ = 'SYS.TIMESTAMP'
            THEN
               ret := newlist (j).data.GETTIMESTAMP (timestamp1);
               IF timestamp1 IS NOT NULL
               THEN
                  upd_qry := upd_qry || newlist (j).column_name || equals
                          || ' ''' || timestamp1 || '''';
                  iscomma := TRUE;
               END IF;
            END IF;
         END IF;
      END LOOP;

      -- Build the WHERE clause from the PK columns
      upd_qry := upd_qry || ' WHERE ';
      FOR i IN 1 .. pk_count
      LOOP
         SELECT column_name
           INTO pk_table (i)
           FROM all_cons_columns col, all_constraints con
          WHERE col.table_name = con.table_name
            AND col.constraint_name = con.constraint_name
            AND con.constraint_type = 'P'
            AND position = i
            AND con.table_name = rowlcr.GET_OBJECT_NAME;

         FOR j IN 1 .. newlist.COUNT
         LOOP
            IF newlist (j) IS NOT NULL
               AND newlist (j).column_name = pk_table (i)
            THEN
               upd_qry := upd_qry || ' ' || newlist (j).column_name
                       || ' ' || equals;
               typ := newlist (j).data.GETTYPENAME;
               IF typ = 'SYS.VARCHAR2'
               THEN
                  ret := newlist (j).data.GETVARCHAR2 (data1);
                  upd_qry := upd_qry || ' ''' || SUBSTR (data1, 0, 253) || '''';
               ELSIF typ = 'SYS.NUMBER'
               THEN
                  ret := newlist (j).data.GETNUMBER (num1);
                  upd_qry := upd_qry || ' ' || num1;
               END IF;
               IF i < pk_count
               THEN
                  upd_qry := upd_qry || ' AND ';
               END IF;
            END IF;
         END LOOP;
      END LOOP;

      upd_qry := upd_qry || ';';
      DBMS_OUTPUT.PUT_LINE (upd_qry);
   END LOOP;
END;Thanks for you replies HTH and Dipali.
I would like to clarify a few points about the issue I have raised.
1. The "No Data Found" error occurs on a table for which supplemental logging is enabled.
2. As I understand it, the apply process compares the existing data in the destination database with the "old" data in the LCR.
When these two do not match, ORA-01403 is thrown. (Please tell me whether my understanding is correct.)
3. The mismatch can be on a date field, or even on the millisecond part of a timestamp.
Now, the point I am really wondering about:
somehow a mismatch was introduced in the destination database (I am not sure how), and ORA-01403 is thrown.
If we could update the destination database with the "old" data from the LCR, this mismatch should be resolved, shouldn't it?
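The comparison described in point 2 can be sketched conceptually in Python. This is only an illustration of the idea, not Oracle's actual apply logic; the column names and values below are made up:

```python
# Conceptual sketch: the apply process carries the "old" column values in
# the LCR and compares them with the row currently stored at the
# destination. Any mismatch -- including a timestamp that differs only in
# its fractional seconds -- means the expected row cannot be found, which
# surfaces as ORA-01403 (no data found).

def detect_conflict(lcr_old_values, destination_row):
    """Return the columns whose destination value no longer matches the
    old value recorded in the LCR."""
    mismatches = []
    for column, old_value in lcr_old_values.items():
        if destination_row.get(column) != old_value:
            mismatches.append(column)
    return mismatches

# Hypothetical example: the destination timestamp drifted by a millisecond.
lcr_old = {"ID": 42, "UPDATED_AT": "2009-03-06 16:59:20.000"}
dest_row = {"ID": 42, "UPDATED_AT": "2009-03-06 16:59:20.001"}

print(detect_conflict(lcr_old, dest_row))  # prints ['UPDATED_AT']
```

Updating the destination row with the LCR's old values would make this comparison succeed again, which is what the generated UPDATE statement above attempts.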
Reply to Dipali:
If nothing else works, I am planning to put a conflict handler on all tables with the "OVERWRITE" option, using the following script:
-- Generate a script that registers a conflict handler for each table
-- with supplemental logging enabled
declare
   count1 number;
   query  varchar2(500) := null;
begin
   for tables in (
      select table_name
        from user_tables
       where table_name in ("NAMES OF TABLES FOR WHICH SUPPLEMENTAL LOGGING IS ENABLED")
   )
   loop
      count1 := 0;
      dbms_output.put_line('DECLARE');
      dbms_output.put_line('cols DBMS_UTILITY.NAME_ARRAY;');
      dbms_output.put_line('BEGIN');

      -- number of primary-key columns for this table
      select max(position)
        into count1
        from all_cons_columns col, all_constraints con
       where col.table_name = con.table_name
         and col.constraint_name = con.constraint_name
         and con.constraint_type = 'P'
         and con.table_name = tables.table_name;

      -- emit one cols(i) assignment per primary-key column
      for i in 1 .. count1
      loop
         query := null;
         select 'cols(' || position || ')' || ' := ' || '''' || column_name || ''';'
           into query
           from all_cons_columns col, all_constraints con
          where col.table_name = con.table_name
            and col.constraint_name = con.constraint_name
            and con.constraint_type = 'P'
            and con.table_name = tables.table_name
            and position = i;
         dbms_output.put_line(query);
      end loop;

      dbms_output.put_line('DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(');
      dbms_output.put_line('object_name => ''ICOOWR.' || tables.table_name || ''',');
      dbms_output.put_line('method_name => ''OVERWRITE'',');
      dbms_output.put_line('resolution_column => ''COLM_NAME'',');
      dbms_output.put_line('column_list => cols);');
      dbms_output.put_line('END;');
      dbms_output.put_line('/');
      dbms_output.put_line('');
   end loop;
end;
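The generator above is pure string-building: one anonymous block per table, listing its primary-key columns and then the SET_UPDATE_CONFLICT_HANDLER call. The same approach can be sketched in Python with hypothetical owner/table/PK metadata hard-coded (the PL/SQL version reads these from ALL_CONS_COLUMNS; the 'COLM_NAME' resolution-column placeholder is kept exactly as in the script above):

```python
# Sketch of the generator's output, for one table. The owner, table, and
# primary-key column names here are made-up examples.

def conflict_handler_block(owner, table, pk_columns):
    """Build the anonymous PL/SQL block that registers an OVERWRITE
    conflict handler for one table."""
    lines = ["DECLARE", "cols DBMS_UTILITY.NAME_ARRAY;", "BEGIN"]
    # one cols(i) assignment per primary-key column, in key position order
    for pos, col in enumerate(pk_columns, start=1):
        lines.append(f"cols({pos}) := '{col}';")
    lines += [
        "DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(",
        f"object_name => '{owner}.{table}',",
        "method_name => 'OVERWRITE',",
        "resolution_column => 'COLM_NAME',",  # placeholder, as in the post
        "column_list => cols);",
        "END;",
        "/",
    ]
    return "\n".join(lines)

print(conflict_handler_block("ICOOWR", "ORDERS", ["ORDER_ID", "LINE_NO"]))
```

Spooling the generated blocks to a file and running them as the Streams administrator would register the handlers in one pass.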
Reply to HTH:
Our destination database is a replica of the source, and no triggers run on any of these tables.
This is not the first time I have faced this issue. Earlier, we had to take long outages to clear the replica database and apply a fresh dump from the source...
I cannot go through that again. -
4G LTE data reception issue in area of work building
Hi, I'm having a data reception issue in a certain area at work. The signal indicator at the upper right of the homescreen shows "4GLTE" but this is clearly inaccurate since I am not able to navigate to websites or send/receive multimedia messages. If I move ~30 feet east in the building, the reception is restored. Two people with iPhone 5 devices have the same issue. However, the Verizon iPhone 5 allows you to turn off LTE. Once this was done and the signal fell back to 3G, reception was restored, albeit with slower speeds, but at least reception wasn't completely blocked. I understand 4G is not available in all areas, but in this case, the phone is not automatically switching to 3G and there is no workaround because there is no option to turn off LTE on the Z10. In the "Settings" -> "Network Connections" -> "Mobile Network" -> "Network Technology" dropdown, the only values are:
UMTS/GSM (when I switch to this, no networks are found)
Global (the current selection)
LTE/CDMA
This is a big problem for me because, for 8+ hours a day, I can't receive MMS messages or navigate to websites.

Hi, Nate650,
Sorry to hear about your problem with 4G. First, let me ask: have you updated your Z10 to the latest official software version? I had a similar problem with my Z10. After about an hour on the phone with CS, we figured out it was a problem with the tower near me. The problem was fixed by VZW and I have not had connection issues since. You are right, though, about the Z10 not falling back to 3G. Mine did before the update, but not since.
Doc