Query with aggregates over collection of transient instances throws an error
Hi, I'm executing a query with aggregates and it throws an exception with the following message: "Queries with aggregates or projections using variables currently cannot be executed in-memory. Either set the javax.jdo.option.IgnoreCache property to true, set IgnoreCache to true for this query,
set the kodo.FlushBeforeQueries property to true, or execute the query before changing any instances in the transaction."
The offending query was on type "class Pago" with filter "productosServicios.contains(item)".
The class Pago has the field productosServicios which is a List of Pago$ItemMonto, the relevant code is :
KodoQuery query = (KodoQuery)pm.newQuery(Pago.class,
pagos);
where pagos is a list of transient instances of type Pago.
query.declareVariables("Pago$ItemMonto item");
query.setFilter("productosServicios.contains(item)");
query.setGrouping("item.id");
query.setResult("item.id as idProductoServicio, sum(montoTotal) as montoTotal");
query.setResultClass(PagoAgrupado.class);
where the class PagoAgrupado has the corresponding fields idProductoServicio and montoTotal.
In other words, I want to aggregate the id field of class ItemMonto over the instances contained in the productosServicios field of class Pago.
I have set the ignoreCache and kodo.FlushBeforeQueries flags to true in the kodo.properties file and on the PersistenceManager and query instances, but it has not worked. What can be wrong?
I'm using Kodo 3.2.4, MySQL 5.0
Thanks,
Jaime.
Message was edited by:
jdelajaraf
Similar Messages
-
Update of a table from a select query with aggregate functions.
Hello All,
I have a problem here:
I have 2 tables, A(a1, a2, a3, a4, ...) and B(a1, a2, b1, b2, b3). I need to calculate avg(a4-a3), max(a4-a3) and min(a4-a3) and insert them into table B. If the foreign keys a1, a2 already exist in table B, I need to update the computed values into columns b1, b2 and b3 respectively, for that a1, a2.
Q1. Is it possible to do this with a single query ? I would prefer not to join A with B because the table A is very large. Also columns b1, b2 and b3 are non-nullable.
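For Q1, a single MERGE statement can likely do the insert-or-update in one pass. A hedged sketch, assuming b1 = avg, b2 = max, b3 = min and that (a1, a2) identifies a row in B:

```sql
-- Sketch only: column roles (b1 = avg, b2 = max, b3 = min) are assumed.
-- If a3/a4 are TIMESTAMPs, cast them to DATE first, since AVG does not
-- work on INTERVAL DAY TO SECOND.
MERGE INTO b
USING (SELECT a1, a2,
              AVG(a4 - a3) AS avg_v,
              MAX(a4 - a3) AS max_v,
              MIN(a4 - a3) AS min_v
       FROM   a
       GROUP  BY a1, a2) s
ON    (b.a1 = s.a1 AND b.a2 = s.a2)
WHEN MATCHED THEN
  UPDATE SET b.b1 = s.avg_v, b.b2 = s.max_v, b.b3 = s.min_v
WHEN NOT MATCHED THEN
  INSERT (a1, a2, b1, b2, b3)
  VALUES (s.a1, s.a2, s.avg_v, s.max_v, s.min_v);
```

The MERGE reads A once, so there is no join of A against B beyond the ON lookup.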
Q2. Also if a4 and a3 are timestamps what is the best way to find the average? A difference of timestamps yields INTERVAL DAY TO SECOND over which the avg function doesn't seem to work. The averages, max and min in my case would be less than a day and hence all I need is to get the data in the hh:mm:ss format.
As of now I'm using :
TO_CHAR(TO_DATE(ABS(MOD(TRUNC(AVG(extract(hour FROM (last_modified_date - created_date))*3600 +
extract( minute FROM (last_modified_date - created_date))*60 +
extract( second FROM (last_modified_date - created_date)))
),86400)),'sssss'),'hh24":"mi":"ss') AS avg_time,
But this is very long-winded. Something more compact and efficient would be nice.
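One more compact alternative (a sketch, untested here): cast the TIMESTAMPs to DATE, since DATE subtraction yields a plain NUMBER of days that AVG handles directly. This is valid while the durations stay under one day, as stated above:

```sql
-- DATE minus DATE is a NUMBER of days; AVG works on NUMBERs.
-- TRUNC(SYSDATE) + fraction-of-a-day is then formatted as hh24:mi:ss
-- (correct only while the average is < 1 day).
SELECT TO_CHAR(TRUNC(SYSDATE)
               + AVG(CAST(last_modified_date AS DATE)
                   - CAST(created_date AS DATE)),
               'hh24:mi:ss') AS avg_time
FROM   a;
```

The same CAST trick makes MAX and MIN equally short.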
Thanks in advance for your inputs.
Edited by: 847764 on Mar 27, 2011 5:35 PM
847764 wrote:
Hi,
Thanks everyone for such fast replies. Malakshinov's example worked fine for me as far as updating the table goes. As for the timestamp computations, I'm posting additional info:
Sorry, I don't understand.
If Malakshinov's example worked for updating the table, but you still have problems, does that mean you have to do something else besides update the table? If so, what?
Oracle version : Oracle Database 11g Enterprise Edition Release 11.2.0.1.0
Here are the table details :
DESC Table A
Name Null Type
ID NOT NULL NUMBER
A1 NOT NULL VARCHAR2(4)
A2 NOT NULL VARCHAR2(40)
A3 NOT NULL VARCHAR2(40)
CREATED_DATE NOT NULL TIMESTAMP(6)
LAST_MODIFIED_DATE TIMESTAMP(6)
Describing the tables can help clarify some things, but it's no substitute for posting CREATE TABLE and INSERT statements. With only a description of the table, nobody can re-create the problem or test their ideas. Please post CREATE TABLE and INSERT statements for both tables as they exist before the MERGE. If table b doesn't contain any rows before the MERGE, then just say so, but you still need to post a CREATE TABLE statement for both tables, and INSERT statements for table a.
The objective is to compute the response times: avg(LAST_MODIFIED_DATE - CREATED_DATE), max(LAST_MODIFIED_DATE - CREATED_DATE) and min(LAST_MODIFIED_DATE - CREATED_DATE), grouped by A1 and A2, and store them in table B under AVG_T, MAX_T and MIN_T. Since AVG_T, MAX_T and MIN_T are only used for reporting purposes we have kept them as VARCHAR (though I think keeping them as timestamps would make more sense).
I think a NUMBER would make more sense (the number of minutes, for example), or perhaps an INTERVAL DAY TO SECOND. If you stored a NUMBER, it would be easy to compute averages.
In table B the times are stored in the format hh:mm:ss. We don't need millisecond precision.
If you don't need milliseconds, then you should use DATE instead of TIMESTAMP. The functions for manipulating DATEs are much better.
Hence I was calculating is as follows:
-- Avg Time
TO_CHAR(TO_DATE(ABS(MOD(TRUNC(AVG(extract(hour FROM (last_modified_date - created_date))*3600 +
extract( minute FROM (last_modified_date - created_date))*60 +
extract( second FROM (last_modified_date - created_date)))
),86400)),'sssss'),'hh24":"mi":"ss') AS avg_time,
--Max Time
extract (hour FROM MAX(last_modified_date - created_date))||':'||extract (minute FROM MAX(last_modified_date - created_date))||':'||TRUNC(extract (second FROM MAX(last_modified_date - created_date))) AS max_time,
--Min Time
extract (hour FROM MIN(last_modified_date - created_date))||':'||extract (minute FROM MIN(last_modified_date - created_date))||':'||TRUNC(extract (second FROM MIN(last_modified_date - created_date))) AS min_time
Is this something that has to be done before or after the MERGE?
Post the complete statement.
Is this part of a query? Where's the SELECT keyword?
Is this part of a DML operation? Where's the INSERT, or UPDATE, or MERGE keyword?
What are the exact results you want from this? Explain how you get those results.
Is the code above getting the right results? Are you just asking if there's a better way to get the same results?
You have to explain things very carefully. None of the people who want to help you are familiar with your application, or your needs.
I just noticed that my reply is horribly formatted - apologies! I'm just getting the hang of it.
Whenever you post formatted text (such as query results) on this site, type these 6 characters:
{code} (small letters only, inside curly brackets) before and after each section of formatted text, to preserve spacing. -
Query with aggregate on custom mapping returning wrong type
I've got a JDOQL query that returns the sum of a single column, where that
column is custom-mapped, but the result I get back is losing precision.
I create the JDOQL query as normal and set the result to the aggregate
expression:
KodoQuery query = (KodoQuery) pm.newQuery(candidateClass, filter);
query.setResult("sum(amount)");
I can also setUnique for good measure as I am expecting just 1 row back:
query.setUnique(true);
The query returns an Integer, but my amount column is a decimal with 5
digits after the decimal point. If I ask for a Double or BigDecimal as the
resultClass, it does return an object of that type, but loses all
precision after the decimal point:
query.setResultClass(BigDecimal.class);
The amount field in my candidate class is of the class Money, a class that
encapsulates a currency and a BigDecimal amount. See
http://www.martinfowler.com/ap2/quantity.html
It is mapped as a custom money mapping to an amount and currency column,
based on the custom mapping in the Kodo examples. I have tried mapping the
amount as a BigDecimal value, and querying the sum of this works. So the
problem seems to be the aggregate query on my custom mapping. Do I need to
write some code for my custom mapping to be able to handle aggregates?
Thanks,
Alex
Can you post your custom mapping?
Also, does casting the value have any effect?
q.setResult ("sum((BigDecimal) amount)"); -
Materialized view with aggregates doing a fast refresh
Why is it that I need count(*) and count(<expressions used>) in my materialized view query with aggregates?
say mat view query is:
select emp.deptno, sum(sal) from emp, dept where emp.deptno = dept.deptno group by emp.deptno
can't do a fast refresh.
BUT
select emp.deptno, sum(sal), count(*), count(sal) from emp, dept where emp.deptno = dept.deptno group by emp.deptno
does a fast refresh. Why?
Also, it's mentioned in the manuals that count(*) and count(expr) are needed, but they don't explain why.
Thanks
Thanks for the correction. I just wanted to simulate the query and it was a typing mistake; sorry for that.
My query is working fine with count(). If I understand it correctly, count() is used to determine whether, say, a delete on the master table should translate into an update or a delete of the MV row: the count is decremented on delete, and if it reaches 0 the aggregated row must be deleted from the MV; otherwise the row is updated, even in the case of a delete.
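The bookkeeping just described shows up in a minimal fast-refreshable aggregate MV. A single-table sketch (standard EMP schema and all names assumed; for mixed DML Oracle also recommends SEQUENCE in the log):

```sql
-- The MV log must capture the referenced columns with ROWID and new values.
CREATE MATERIALIZED VIEW LOG ON emp
  WITH SEQUENCE, ROWID (deptno, sal) INCLUDING NEW VALUES;

-- COUNT(*) and COUNT(sal) are what let the fast refresh decide between
-- updating an aggregated row and deleting it when its count drops to 0.
CREATE MATERIALIZED VIEW mv_dept_sal
  REFRESH FAST ON COMMIT
AS
SELECT deptno,
       SUM(sal)   AS sum_sal,
       COUNT(sal) AS cnt_sal,
       COUNT(*)   AS cnt_rows
FROM   emp
GROUP  BY deptno;
```

Dropping either COUNT from the defining query makes the MV fall back to complete refresh.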
But this only answers why count() is needed for the columns in the group by clause.
I don't really see a need for count() in the case where I'm updating the measures, as the materialized view logs should take care of it. -
Please help with an embedded query (INSERT RETURNING BULK COLLECT INTO)
I am trying to write a query inside the C# code where I would insert values into a table in bulk using bind variables. But I would also like to receive a bulk collection of the generated sequence-number IDs for REQUEST_ID. I am trying to use a RETURNING REQUEST_ID BULK COLLECT INTO :REQUEST_IDs clause, where :REQUEST_IDs is another bind variable.
Here is the full query used in the C# code:
string sql = "INSERT INTO REQUESTS_TBL(REQUEST_ID, CID, PROVIDER_ID, PROVIDER_NAME, REQUEST_TYPE_ID, REQUEST_METHOD_ID, " +
"SERVICE_START_DT, SERVICE_END_DT, SERVICE_LOCATION_CITY, SERVICE_LOCATION_STATE, " +
"BENEFICIARY_FIRST_NAME, BENEFICIARY_LAST_NAME, BENEFICIARY_DOB, HICNUM, CCN, " +
"CLAIM_RECEIPT_DT, ADMISSION_DT, BILL_TYPE, LANGUAGE_ID, CONTRACTOR_ID, PRIORITY_ID, " +
"UNIVERSE_DT, REQUEST_DT, BENEFICIARY_M_INITIAL, ATTENDING_PROVIDER_NUMBER, " +
"BILLING_NPI, BENE_ZIP_CODE, DRG, FINAL_ALLOWED_AMT, STUDY_ID, REFERRING_NPI) " +
"VALUES " +
"(SQ_CDCDATA.NEXTVAL, :CIDs, :PROVIDER_IDs, :PROVIDER_NAMEs, :REQUEST_TYPE_IDs, :REQUEST_METHOD_IDs, " +
":SERVICE_START_DTs, :SERVICE_END_DTs, :SERVICE_LOCATION_CITYs, :SERVICE_LOCATION_STATEs, " +
":BENEFICIARY_FIRST_NAMEs, :BENEFICIARY_LAST_NAMEs, :BENEFICIARY_DOBs, :HICNUMs, :CCNs, " +
":CLAIM_RECEIPT_DTs, :ADMISSION_DTs, :BILL_TYPEs, :LANGUAGE_IDs, :CONTRACTOR_IDs, :PRIORITY_IDs, " +
":UNIVERSE_DTs, :REQUEST_DTs, :BENEFICIARY_M_INITIALs, :ATTENDING_PROVIDER_NUMBERs, " +
":BILLING_NPIs, :BENE_ZIP_CODEs, :DRGs, :FINAL_ALLOWED_AMTs, :STUDY_IDs, :REFERRING_NPIs) " +
" RETURNING REQUEST_ID BULK COLLECT INTO :REQUEST_IDs";
int[] REQUEST_IDs = new int[range];
cmd.Parameters.Add(":REQUEST_IDs", OracleDbType.Int32, REQUEST_IDs, System.Data.ParameterDirection.Output);
However, when I run this query, it gives me a strange error: ORA-00925: missing INTO keyword. I am not sure what that error means, since I am not missing any INTOs.
Please help me resolve this error, or I would appreciate a different solution.
Thank you
It seems you are not doing a bulk insert but rather an array bind.
(Note that an INSERT with a BULK COLLECT RETURNING clause is problematic, while this works just fine for UPDATE/DELETE; see
http://www.oracle-developer.net/display.php?id=413)
But you are using array bind, so you simply just need to use a
... RETURNING REQUEST_ID INTO :REQUEST_ID
and that'll return you a Request_ID[].
see below for a working example (I used a procedure but the result is the same)
//Create Table Zzztab(Deptno Number, Deptname Varchar2(50) , Loc Varchar2(50) , State Varchar2(2) , Idno Number(10)) ;
//create sequence zzzseq ;
//CREATE OR REPLACE PROCEDURE ZZZ( P_DEPTNO IN ZZZTAB.DEPTNO%TYPE,
// P_DEPTNAME IN ZZZTAB.DEPTNAME%TYPE,
// P_LOC IN ZZZTAB.LOC%TYPE,
// P_State In Zzztab.State%Type ,
// p_idno out zzztab.idno%type )
// IS
//Begin
// Insert Into Zzztab (Deptno, Deptname, Loc, State , Idno)
// Values (P_Deptno, P_Deptname, P_Loc, P_State, Zzzseq.Nextval)
// returning idno into p_idno;
//END ZZZ;
//Drop Procedure Zzz ;
//Drop Sequence Zzzseq ;
//drop Table Zzztab;
class ArrayBind
{
    static void Main(string[] args)
    {
        // Connect
        string connectStr = GetConnectionString();
        // Setup the tables for the sample
        Setup(connectStr);
        // Initialize arrays of data
        int[] myArrayDeptNo = new int[3] { 1, 2, 3 };
        String[] myArrayDeptName = { "Dev", "QA", "Facility" };
        String[] myArrayDeptLoc = { "New York", "Chicago", "Texas" };
        String[] state = { "NY", "IL", "TX" };
        OracleConnection connection = new OracleConnection(connectStr);
        OracleCommand command = new OracleCommand("zzz", connection);
        command.CommandType = CommandType.StoredProcedure;
        // Set the array size to 3. This applies to all the parameters
        // associated with this command
        command.ArrayBindCount = 3;
        command.BindByName = true;
        // deptno parameter
        OracleParameter deptNoParam = new OracleParameter("p_deptno", OracleDbType.Int32);
        deptNoParam.Direction = ParameterDirection.Input;
        deptNoParam.Value = myArrayDeptNo;
        command.Parameters.Add(deptNoParam);
        // deptname parameter
        OracleParameter deptNameParam = new OracleParameter("p_deptname", OracleDbType.Varchar2);
        deptNameParam.Direction = ParameterDirection.Input;
        deptNameParam.Value = myArrayDeptName;
        command.Parameters.Add(deptNameParam);
        // loc parameter
        OracleParameter deptLocParam = new OracleParameter("p_loc", OracleDbType.Varchar2);
        deptLocParam.Direction = ParameterDirection.Input;
        deptLocParam.Value = myArrayDeptLoc;
        command.Parameters.Add(deptLocParam);
        // p_state -- array
        OracleParameter stateParam = new OracleParameter("P_STATE", OracleDbType.Varchar2);
        stateParam.Direction = ParameterDirection.Input;
        stateParam.Value = state;
        command.Parameters.Add(stateParam);
        // idParam -- output array
        OracleParameter idParam = new OracleParameter("p_idno", OracleDbType.Int64);
        idParam.Direction = ParameterDirection.Output;
        idParam.OracleDbTypeEx = OracleDbType.Int64;
        command.Parameters.Add(idParam);
        try
        {
            connection.Open();
            command.ExecuteNonQuery();
            Console.WriteLine("{0} Rows Inserted", command.ArrayBindCount);
            // now cycle through the output param array
            foreach (Int64 i in (Int64[])idParam.Value)
                Console.WriteLine(i);
        }
        catch (Exception e)
        {
            Console.WriteLine("Execution Failed:" + e.Message);
        }
        finally
        {
            // connection and command hold server-side resources; dispose them
            // asap to conserve resources
            connection.Close();
            command.Dispose();
            connection.Dispose();
        }
        Console.WriteLine("Press Enter to finish");
        Console.ReadKey();
    }
} -
Need complex query with joins and AGGREGATE functions.
Hello Everyone ;
Good Morning to all ;
I have 3 tables with 2 lakh (200,000) records. I need to check query performance. How does the CBO rewrite my query with a materialized view?
I want to make complex join with AGGREGATE FUNCTION.
my table details
SQL> select * from tab;
TNAME TABTYPE CLUSTERID
DEPT TABLE
PAYROLL TABLE
EMP TABLE
SQL> desc emp
Name
EID
ENAME
EDOB
EGENDER
EQUAL
EGRADUATION
EDESIGNATION
ELEVEL
EDOMAIN_ID
EMOB_NO
SQL> desc dept
Name
EID
DNAME
DMANAGER
DCONTACT_NO
DPROJ_NAME
SQL> desc payroll
Name
EID
PF_NO
SAL_ACC_NO
SALARY
BONUS
I want to make complex query with joins and AGGREGATE functions.
Dept names are : IT , ITES , Accounts , Mgmt , Hr
GRADUATIONS are : Engineering , Arts , Accounts , business_applications
I want to select records for people working in IT and ITES whose graduation is "Engineering",
with salary > 20000 and <= 22800 and bonus > 1000 and <= 1999, with counts for males and females separately;
Please help me to make a such complex query with joins ..
Thanks in advance ..
Edited by: 969352 on May 25, 2013 11:34 AM
969352 wrote:
why do you avoid providing requested & NEEDED details?
I do NOT understand. What do you expect?
My Goal is :
1. When executing my own query I need to check the explain plan.
Please proceed to do so:
http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_9010.htm#SQLRF01601
2. If I enable the query rewrite option, I want to check the explain plan (how the optimizer rewrites my query).
Please proceed to do so:
http://docs.oracle.com/cd/E11882_01/server.112/e16638/ex_plan.htm#PFGRF009
3. My only aim is QUERY PERFORMANCE with the QUERY REWRITE clause in a materialized view.
It is an admirable goal.
Best Wishes on your quest for performance improvements. -
SQL Report query with condition (multiple parameters) in apex item?
Hello all,
I have a little problem and can't find a solution.
I need to create reports based on a SQL query or I.R. Nothing hard there.
Then I need to add the WHERE clause dynamically with javascript from an Apex item.
Again not very hard. I defined an Apex item, set my query like this "SELECT * FROM MYTAB WHERE COL1 = :P1_SEARCH" and then I call the page setting the P1_SEARCH value. For instance COL1 is rowid. It works fine.
But here is my problem. Let's consider that P1_SEARCH will contain several rowids and that I don't know the number of those values
(no, I won't create a lot of items and build a query with so many ORs!). I would like something like "SELECT * FROM MYTAB WHERE ROWID IN (:P1_SEARCH)" with something like ROWID1,ROWID2 in P1_SEARCH.
I also tried : 'ROWID1,ROWID2' and 'ROWID1','ROWID2'
but I can't get anything other than a filter error. It works with IN with one value, but as soon as there are two or more values, it seems that Apex can't read the string.
How could I do that, please?
Thanks for your help.
Max
mnoscars wrote:
But here is my problem. Let's consider that P1_SEARCH will contain several rowids and that I don't know the number of those values,
(no, I won't create a lot of items and build a query with so many ORs!). I would like something like "SELECT * FROM MYTAB WHERE ROWID IN (:P1_SEARCH)" with something like ROWID1,ROWID2 in P1_SEARCH.
I also tried : 'ROWID1,ROWID2' and 'ROWID1','ROWID2'
but I can't get anything other than a filter error. It works with IN with one value, but as soon as there are two or more values, it seems that Apex can't read the string.
For a standard report, see +{message:id=9609120}+
For an IR—and improved security avoiding the risk of SQL Injection—use a <a href="http://download.oracle.com/docs/cd/E17556_01/doc/apirefs.40/e15519/apex_collection.htm#CACFAICJ">collection</a> containing the values in a column instead of a CSV list:
{code}
SELECT * FROM MYTAB WHERE ROWID IN (SELECT c001 FROM apex_collections WHERE collection_name = 'P1_SEARCH')
{code}
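For a standard (non-IR) report, one hedged alternative is to split the CSV inside the query itself. This assumes APEX_STRING.SPLIT is available (it ships with APEX 5.0 and later; on the 4.0 release this thread references, a manual INSTR/SUBSTR split would be needed instead):

{code}
-- apex_string.split returns a table of VARCHAR2 exposed as COLUMN_VALUE
SELECT *
FROM   mytab
WHERE  rowid IN (SELECT column_value
                 FROM   TABLE(apex_string.split(:P1_SEARCH, ',')))
{code}

Unlike string concatenation into the IN list, both this and the collection approach keep the values as bind data, so they are safe from SQL injection.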
(Please close duplicate threads spawned by your original question.) -
Unable to copy database with different name in the same instance
I have a huge database and wanted to try some optimization changes.
So I wanted to make a copy of the database, along with its data, in the same instance.
I have tried copy database wizard several times but always see the error as in attachment.
Can someone let me know how to troubleshoot further?
If this is not the correct way, please suggest how I can copy a database under a different name in the same instance, along with its data.
Hi Nandu,
From the screenshot, error 1813 happens when a corrupt database log is attached to SQL Server. To work around this issue, please perform the following steps; for more details, please review this
blog.
1. Create a new database with the same name as the one you want to recover. Make sure that the MDF and LDF files have the same names as the previous database's data and log files.
2. Stop SQL Server. Move the original MDF file from the old location to the new location, replacing the MDF file just created. Delete the LDF file just created in the new location.
3. Start SQL Server. At this point, the database is in suspect status.
4. Make sure that the system tables of the Master database allow updates. Please note that you will be performing this in a query window.
Use Master
go
sp_configure 'allow updates',1
reconfigure with override
go
5. Change database mode to emergency mode.
SELECT *
FROM sysdatabases
WHERE name = 'DatabaseName'
BEGIN TRAN
UPDATE sysdatabases
SET status = 32768
WHERE name = 'DatabaseName'
COMMIT TRAN
6. Restart SQL Server. Then execute the following DBCC command in query window to create new log file.
DBCC TRACEON (3604)
DBCC REBUILD_LOG(databasename,'c:\yourdatabasename_log.ldf')
GO
7. Reset the database status using following command.
sp_RESETSTATUS yourdatabasename
GO
8. Turn off the update to system tables of Master database running following script.
USE MASTER
GO
sp_CONFIGURE 'allow updates',0
RECONFIGURE WITH OVERRIDE
GO
9. Reset the database status to previous status.
BEGIN TRAN
UPDATE sysdatabases
SET status = (value retrieved in first query of step 5)
WHERE name = 'DatabaseName'
COMMIT TRAN
GO
Make sure that you have done all the steps in order and restarted SQL Server where it is mentioned. Also run SQL Server Management Studio as administrator.(Right click-> Run as Administrator)
Thanks,
Lydia Zhang -
Query vs Toplink managed collection and cascade persist
A fairly simple situation: I have a 1-N relation which is managed by Toplink: 1 relation can have N calendars (I know, badly chosen class name, but alas).
If I access the collection through the Toplink managed collection, make a change to one of the calendars and then merge the relation, the change in the calendar instance automatically is detected and also persisted.
However, if I use a query (because I do not need all calendars) to find the same instance and make the same change, then it is not persisted. Apparently the "cascade persist" is not done here.
There are a few ways around this:
1. fetch the original collection and by compare-and-remove emulate the query
2. do a setRelation(null) and then setRelation(xxx) of the relation
3. do a merge inside the transaction (a merge outside does not work)
The funny thing is, workaround #2 really sets the same relation again!
Is there a way to have the result of a query also cascade persist?
Well, I do not want to do it in a transaction, because then the changes are written to the database immediately and that will result in all kinds of locking problems (this is a fat-client situation). What I want is fairly simple: the user modifies entities in an object graph in memory and at the end of his work either presses "cancel" and clears all changes, or presses "save" and stores all changes. When he presses "save" I expect the EM to persist every changed entity.
This approach works OK for all scenarios I have implemented up until now. The current one is different in that I get related entities not by traversing the object graph (so via Cascade.PERSIST collections), but via a query. There is one major difference between these two: the entities from the collections are automatically persisted, the ones from a query are not, BUT they are, for all intents and purposes, identical. Specifically: the collection gives me ALL calendars associated with the relation, the query only those from a timespan, but still associated with the relation.
For some reason I expected the entities to also auto-persist, BECAUSE they also are present in the collection.
Ok then, so I understand that entities fetched through a query are unrelated to any other entity, even though they also exist in a Cascade.PERSIST collection. (I still have to test what happens if I, after the query, also access the collection: will the same object be present?)
That being as it is, I need to merge each query's entity separately, and thus I expect the EM to remember any entities merged outside a transaction, but it does not. That I do not understand.
Now, I already have a patched / extended EM because of a strange behavior in the remove vs clear dynamics, so this was a minor add-on and works perfectly (so far ;-). But if you have a better idea how to remember changes to entities, which are to be merged upon transaction start... Please! -
SQL query with Bind variable with slower execution plan
I have a 'normal' sql select-insert statement (not using bind variable) and it yields the following execution plan:-
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=148)
1 0 HASH JOIN (Cost=7 Card=1 Bytes=148)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=4 Card=1 Bytes=100)
3 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=3 Card=1)
4 1 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE)
(Cost=2 Card=135 Bytes=6480)
Statistics
0 recursive calls
18 db block gets
15558 consistent gets
47 physical reads
9896 redo size
423 bytes sent via SQL*Net to client
1095 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
I have the same query but instead running using bind variable (I test it with both oracle form and SQL*plus), it takes considerably longer with a different execution plan:-
Execution Plan
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=407 Card=1 Bytes=148)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'TABLEA' (Cost=3 Card=1 Bytes=100)
2 1 NESTED LOOPS (Cost=407 Card=1 Bytes=148)
3 2 INDEX (FAST FULL SCAN) OF 'TABLEB_IDX_003' (NON-UNIQUE) (Cost=2 Card=135 Bytes=6480)
4 2 INDEX (RANGE SCAN) OF 'TABLEA_IDX_2' (NON-UNIQUE) (Cost=2 Card=1)
Statistics
0 recursive calls
12 db block gets
3003199 consistent gets
54 physical reads
9448 redo size
423 bytes sent via SQL*Net to client
1258 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
55 rows processed
TABLEA has around 3 million records while TABLEB has 300 records. Is there any way I can improve the speed of the SQL query with the bind variable? I have DBA access to the database.
Regards
Ivan
Many thanks for your reply.
I have already gathered statistics for both tableA and tableB, as well as for all the indexes associated with both tables (using dbms_stats; I am on a 9i db), but not for the indexed columns.
for table I use:-
begin
dbms_stats.gather_table_stats(ownname=> 'IVAN', tabname=> 'TABLEA', partname=> NULL);
end;
for index I use:-
begin
dbms_stats.gather_index_stats(ownname=> 'IVAN', indname=> 'TABLEB_IDX_003', partname=> NULL);
end;
Is it possible to show me a sample of how to collect statistics for INDEX columns?
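Column-level statistics (histograms) come from the METHOD_OPT parameter of GATHER_TABLE_STATS; a hedged sketch reusing the owner/table names above:

```sql
-- METHOD_OPT requests column statistics; 'FOR ALL INDEXED COLUMNS'
-- limits them to indexed columns, and SIZE AUTO lets Oracle decide
-- which columns get histograms based on usage and skew.
BEGIN
  dbms_stats.gather_table_stats(
    ownname    => 'IVAN',
    tabname    => 'TABLEA',
    cascade    => TRUE,   -- also gathers stats for the table's indexes
    method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');
END;
/
```

With CASCADE => TRUE the separate GATHER_INDEX_STATS calls become unnecessary.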
regards
Ivan -
BI server generating query in a different way between two instances
Hi All,
We executed a report in the dev and test instances, and the BI server generated the query differently in each: the dev BI server is on the AIX operating system (recently migrated from Windows), while the test instance is still on Windows.
For one report, below are the queries:
DEV(AIX)
WITH
SAWITH0 AS (select sum(T316025.SALES_QUOTA) as c1,
T329697.DIVISION_DESC as c2,
T329697.AREA_DESC as c3,
T329697.TERRITORY_DESC as c4,
case when T329697.ACCOUNT_NUM is null then T329697.BILL_TO_PARTY_NAME else concat(concat(concat(T329697.BILL_TO_PARTY_NAME, '('), T329697.ACCOUNT_NUM), ')') end as c5,
T150993.X_CONS_MAJOR_GROUP as c6,
T66755.PER_NAME_ENT_YEAR as c7
from
W_DAY_D T66755 /* Dim_W_DAY_D_Common */ ,
W_PRODUCT_D T67704 /* Dim_W_PRODUCT_D */ ,
WC_SLX_DATA_F T316025 /* Fact_WC_SLX_DATA_F */ ,
WC_CUSTOMER_HIERARCHY_D T329697 /* Dim_WC_Customer_Hierarchy_D_With_Error */ ,
OBIEE_SECURITY_LOCATION_SALES T339204,
W_PROD_CAT_DH T150993 /* Dim_W_PROD_CAT_DH_General */
where ( T66755.ROW_WID = T316025.DAY_WID and T316025.CUSTOMER_HIERARCHY_WID = T329697.ROW_WID and T67704.ROW_WID = T316025.PRODUCT_WID and T67704.PROD_CAT2_WID = T150993.ROW_WID and T329697.TERRITORY_CODE = nvl(T339204.LOCATION , T329697.TERRITORY_CODE) and T329697.AREA_DESC = 'GROCERY AREA - EAST' and T329697.DIVISION_DESC = 'DOMESTIC SALES DIVISION' and T339204.USER_NAME = upper('Administrator') and case when T329697.ACCOUNT_NUM is null then T329697.BILL_TO_PARTY_NAME else concat(concat(concat(T329697.BILL_TO_PARTY_NAME, '('), T329697.ACCOUNT_NUM), ')') end = 'JETRO CASH AND CARRY ENTERPRISES INC(10313)' and (T66755.PER_NAME_ENT_YEAR in ('2011', '2012')) and (T329697.TERRITORY_DESC in ('BOSTON', 'CHARLOTTE', 'FLORIDA', 'GREAT LAKES', 'MID-SOUTH', 'NEW YORK', 'WHITE ROSE')) )
group by T66755.PER_NAME_ENT_YEAR, T150993.X_CONS_MAJOR_GROUP, T329697.TERRITORY_DESC, T329697.AREA_DESC, T329697.DIVISION_DESC, case when T329697.ACCOUNT_NUM is null then T329697.BILL_TO_PARTY_NAME else concat(concat(concat(T329697.BILL_TO_PARTY_NAME, '('), T329697.ACCOUNT_NUM), ')') end )
select distinct SAWITH0.c2 as c1,
SAWITH0.c3 as c2,
SAWITH0.c4 as c3,
SAWITH0.c5 as c4,
SAWITH0.c5 as c5,
SAWITH0.c6 as c6,
SAWITH0.c7 as c7,
SAWITH0.c1 as c8
from
SAWITH0
order by c1, c6
Test(Windows)
select distinct D1.c2 as c1,
D1.c3 as c2,
D1.c4 as c3,
D1.c5 as c4,
D1.c5 as c5,
D1.c6 as c6,
D1.c7 as c7,
D1.c1 as c8
from
(select sum(T316025.SALES_QUOTA) as c1,
T329697.DIVISION_DESC as c2,
T329697.AREA_DESC as c3,
T329697.TERRITORY_DESC as c4,
case when T329697.ACCOUNT_NUM is null then T329697.BILL_TO_PARTY_NAME else concat(concat(concat(T329697.BILL_TO_PARTY_NAME, '('), T329697.ACCOUNT_NUM), ')') end as c5,
T150993.X_CONS_MAJOR_GROUP as c6,
T66755.PER_NAME_ENT_YEAR as c7
from
W_DAY_D T66755 /* Dim_W_DAY_D_Common */ ,
W_PRODUCT_D T67704 /* Dim_W_PRODUCT_D */ ,
WC_SLX_DATA_F T316025 /* Fact_WC_SLX_DATA_F */ ,
WC_CUSTOMER_HIERARCHY_D T329697 /* Dim_WC_Customer_Hierarchy_D_With_Error */ ,
OBIEE_SECURITY_LOCATION_SALES T339204,
W_PROD_CAT_DH T150993 /* Dim_W_PROD_CAT_DH_General */
where ( T66755.ROW_WID = T316025.DAY_WID and T316025.CUSTOMER_HIERARCHY_WID = T329697.ROW_WID and T67704.ROW_WID = T316025.PRODUCT_WID and T67704.PROD_CAT2_WID = T150993.ROW_WID and T329697.TERRITORY_CODE = nvl(T339204.LOCATION , T329697.TERRITORY_CODE) and T329697.AREA_DESC = 'GROCERY AREA - EAST' and T329697.DIVISION_DESC = 'DOMESTIC SALES DIVISION' and T339204.USER_NAME = upper('Administrator') and case when T329697.ACCOUNT_NUM is null then T329697.BILL_TO_PARTY_NAME else concat(concat(concat(T329697.BILL_TO_PARTY_NAME, '('), T329697.ACCOUNT_NUM), ')') end = 'JETRO CASH AND CARRY ENTERPRISES INC(10313)' and (T66755.PER_NAME_ENT_YEAR in ('2011', '2012')) and (T329697.TERRITORY_DESC in ('BOSTON', 'CHARLOTTE', 'FLORIDA', 'GREAT LAKES', 'MID-SOUTH', 'NEW YORK', 'WHITE ROSE')) )
group by T66755.PER_NAME_ENT_YEAR, T150993.X_CONS_MAJOR_GROUP, T329697.TERRITORY_DESC, T329697.AREA_DESC, T329697.DIVISION_DESC, case when T329697.ACCOUNT_NUM is null then T329697.BILL_TO_PARTY_NAME else concat(concat(concat(T329697.BILL_TO_PARTY_NAME, '('), T329697.ACCOUNT_NUM), ')') end
) D1
order by c1, c6
The Test query is simpler and easier to trace back, but the Dev query appends subqueries like SAWITH0, SAWITH1, etc., which makes it harder to read.
Is there any configuration change that would make Dev generate the query like Test (without SAWITH0)?
NOTE: The results are the same in both instances anyway.
Please help me to resolve this issue.
Thank You,
Anil Kumar.
Anil,
Are your database settings the same in both RPDs, dev and test?
Check whether anything was changed from the defaults: open your RPD, double-click the database in the physical layer, and go to Features to review all the settings there (in particular, the WITH_CLAUSE_SUPPORTED feature controls whether the BI Server generates WITH/SAWITH subqueries).
Adil -
How do I create a folder or report from a query with a union and parameters
I have created folders with unions but I am having difficulty converting a query with a union and parameters.
The following works great in SQL*Developer without parameters, but I want to change to use parameters for the year and quarter and use it in Discoverer:
SELECT TO_CHAR(NVL(AV.TAX_ID,999999999),'000000000') FEID,
AV.FIRM_NAME VENDOR_NAME,
AV.BIDCLASS CONTRACT_CODES,
AV.AWAMT AWARD_AMOUNT,
AV.SOL_MODE FORMAL_INFORMAL,
AV.CERT BUSINESS_ENTITY,
AV.ETHNICITY ETHNICTY,
AV.PO_NUMBER_FORMAT CONTRACT,
SUM(VP.INVOICE_AMOUNT) AMOUNT_PAID_$
FROM CONFIRM.VSTATE_PAID_AWARD_VENDORS AV,
CONFIRM.VSTATE_VENDOR_PAYMENTS VP
WHERE ( ( AV.PO_NUMBER = VP.PO_NUMBER
AND AV.VENDOR_ID = VP.VENDOR_ID ) )
AND (TO_CHAR(VP.PAYMENT_DATE,'Q') = '4')
AND ( TO_CHAR(VP.PAYMENT_DATE,'YYYY') = '2009' )
GROUP BY TO_CHAR(NVL(AV.TAX_ID,999999999),'000000000'),
AV.FIRM_NAME,
AV.BIDCLASS,
AV.AWAMT,
AV.SOL_MODE,
AV.CERT,
AV.ETHNICITY,
AV.PO_NUMBER_FORMAT
union
SELECT TO_CHAR(NVL(AV2.TAX_ID,999999999),'000000000') FEID,
AV2.FIRM_NAME VENDOR_NAME,
AV2.BIDCLASS CONTRACT_CODES,
AV2.AWAMT AWARD_AMOUNT,
AV2.SOL_MODE FORMAL_INFORMAL,
AV2.CERT BUSINESS_ENTITY,
AV2.ETHNICITY ETHNICTY,
AV2.PO_NUMBER_FORMAT CONTRACT,
0 AMOUNT_PAID_$
FROM CONFIRM.VSTATE_PAID_AWARD_VENDORS AV2
WHERE
not exists (SELECT 'X'
FROM CONFIRM.VSTATE_VENDOR_PAYMENTS VP2
WHERE av2.po_number = vp2.po_number
AND (TO_CHAR(VP2.PAYMENT_DATE,'Q') = '4')
AND ( TO_CHAR(VP2.PAYMENT_DATE,'YYYY') = '2009' ))
AND (TO_CHAR(AV2.AWDATE,'Q') = '4')
AND (to_CHAR(AV2.AWDATE,'YYYY') = '2009')
GROUP BY TO_CHAR(NVL(AV2.TAX_ID,999999999),'000000000'),
AV2.FIRM_NAME,
AV2.BIDCLASS,
AV2.AWAMT,
AV2.SOL_MODE,
AV2.CERT,
AV2.ETHNICITY,
AV2.PO_NUMBER_FORMAT

Can someone provide a solution?
Thank you,
Robert

Hi,
You can move the parameter expressions into the SELECT list so that you will be able to create conditions over them.
Try using this SQL instead of yours, and in the Discoverer workbook create the conditions and parameters:
SELECT TO_CHAR(NVL(AV.TAX_ID,999999999),'000000000') FEID,
AV.FIRM_NAME VENDOR_NAME,
AV.BIDCLASS CONTRACT_CODES,
AV.AWAMT AWARD_AMOUNT,
AV.SOL_MODE FORMAL_INFORMAL,
AV.CERT BUSINESS_ENTITY,
AV.ETHNICITY ETHNICTY,
AV.PO_NUMBER_FORMAT CONTRACT,
TO_CHAR(VP.PAYMENT_DATE,'YYYY') P_YEAR,
TO_CHAR(VP.PAYMENT_DATE,'Q') P_QTR,
SUM(VP.INVOICE_AMOUNT) AMOUNT_PAID_$
FROM CONFIRM.VSTATE_PAID_AWARD_VENDORS AV,
CONFIRM.VSTATE_VENDOR_PAYMENTS VP
WHERE ( ( AV.PO_NUMBER = VP.PO_NUMBER
AND AV.VENDOR_ID = VP.VENDOR_ID ) )
--AND (TO_CHAR(VP.PAYMENT_DATE,'Q') = '4')
--AND ( TO_CHAR(VP.PAYMENT_DATE,'YYYY') = '2009' )
GROUP BY TO_CHAR(NVL(AV.TAX_ID,999999999),'000000000'),
AV.FIRM_NAME,
AV.BIDCLASS,
AV.AWAMT,
AV.SOL_MODE,
AV.CERT,
AV.ETHNICITY,
AV.PO_NUMBER_FORMAT ,
TO_CHAR(VP.PAYMENT_DATE,'YYYY'),
TO_CHAR(VP.PAYMENT_DATE,'Q')
union
SELECT TO_CHAR(NVL(AV2.TAX_ID,999999999),'000000000') FEID,
AV2.FIRM_NAME VENDOR_NAME,
AV2.BIDCLASS CONTRACT_CODES,
AV2.AWAMT AWARD_AMOUNT,
AV2.SOL_MODE FORMAL_INFORMAL,
AV2.CERT BUSINESS_ENTITY,
AV2.ETHNICITY ETHNICTY,
AV2.PO_NUMBER_FORMAT CONTRACT,
TO_CHAR(AV2.AWDATE,'YYYY') P_YEAR,
TO_CHAR(AV2.AWDATE,'Q') P_QTR,
0 AMOUNT_PAID_$
FROM CONFIRM.VSTATE_PAID_AWARD_VENDORS AV2
WHERE
not exists (SELECT 'X'
FROM CONFIRM.VSTATE_VENDOR_PAYMENTS VP2
WHERE av2.po_number = vp2.po_number
AND (TO_CHAR(VP2.PAYMENT_DATE,'Q') = TO_CHAR(AV2.AWDATE,'Q') )
AND ( TO_CHAR(VP2.PAYMENT_DATE,'YYYY') = TO_CHAR(AV2.AWDATE,'YYYY') ))
--AND (TO_CHAR(AV2.AWDATE,'Q') = '4')
--AND (to_CHAR(AV2.AWDATE,'YYYY') = '2009')
GROUP BY TO_CHAR(NVL(AV2.TAX_ID,999999999),'000000000'),
AV2.FIRM_NAME,
AV2.BIDCLASS,
AV2.AWAMT,
AV2.SOL_MODE,
AV2.CERT,
AV2.ETHNICITY,
AV2.PO_NUMBER_FORMAT,
TO_CHAR(AV2.AWDATE,'YYYY'),
TO_CHAR(AV2.AWDATE,'Q')
Tamir -
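The technique in this reply, exposing the parameter expressions as columns so that conditions can be layered on top, can be sketched as follows (SQLite syntax, invented table and data, purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE payments (po_number TEXT, payment_date TEXT, invoice_amount REAL);
INSERT INTO payments VALUES
  ('PO1','2009-11-15',100), ('PO1','2009-12-01',50), ('PO2','2008-03-10',75);
""")

# Expose the year and quarter as columns (P_YEAR, P_QTR) instead of
# hard-coding them in the WHERE clause; the workbook can then attach
# its conditions and parameters to those columns.
base_query = """
SELECT po_number,
       strftime('%Y', payment_date)                          AS p_year,
       ((CAST(strftime('%m', payment_date) AS INT) + 2) / 3) AS p_qtr,
       SUM(invoice_amount)                                   AS amount_paid
FROM payments
GROUP BY po_number, p_year, p_qtr
"""

# The "condition" layer: filter on the exposed columns, as Discoverer would.
rows = conn.execute(
    "SELECT po_number, amount_paid FROM (" + base_query + ") "
    "WHERE p_year = ? AND p_qtr = ?",
    ("2009", 4),
).fetchall()
print(rows)  # → [('PO1', 150.0)]  (PO1's two Q4-2009 payments aggregated)
```

The payoff is that the same folder serves any year/quarter the user picks, instead of baking '4' and '2009' into the SQL.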
Issue with SQL Query with Presentation Variable as Data Source in BI Publisher
Hello All
I have an issue creating a BIP report based on an OBIEE report that is built with direct SQL. There is one report on the OBIEE dashboard that is written using direct SQL. To create the pixel-perfect version of this report, I am building a BIP data model with SQL Query as the data source. The physical query used for the OBIEE report has several presentation variables in its WHERE clause.
select TILE4, max(APPTS), 'Top Count' from (
SELECT c5 as division, nvl(DECODE (C2,0,0,(c1/c2)*100),0) AS APPTS, NTILE (4) OVER ( ORDER BY nvl(DECODE (C2,0,0,(c1/c2)*100),0)) AS TILE4,
c4 as dept, c6 as month FROM (
select sum(case when T6736.TYPE = 'ATM' then T7608.COUNT end ) as c1,
sum(case when T6736.TYPE in ('Call Center', 'LSM') then T7608.CONFIRMED_COUNT end ) as c2,
T802.NAME_LEVEL_6 as c3,
T802.NAME_LEVEL_1 as c4,
T6172.CALENDARMONTHNAMEANDYEAR as c5,
T6172.CALENDARMONTHNUMBERINYEAR as c6,
T802.DEPT_CODE as c7
from
DW_date_DIM T6736 /* z_dim_date */ ,
DW_MONTH_DIM T6172 /* z_dim_month */ ,
DW_GEOS_DIM T802 /* z_dim_dept_geo_hierarchy */ ,
DW_Count_MONTH_AGG T7608 /* z_fact_Count_month_agg */
where ( T802.DEpt_CODE = T7608.DEPT_CODE and T802.NAME_LEVEL_1 = '@{PV_D}{RSD}'
and T802.CALENDARMONTHNAMEANDYEAR = 'July 2013'
and T6172.MONTH_KEY = T7608.MONTH_KEY and T6736.DATE_KEY = T7608.DATE_KEY
and (T6172.CALENDARMONTHNUMBERINYEAR between substr('@{Month_Start}',0,6) and substr('@{Month_END}',8,13))
and (T6736.TYPE in ('Call Center', 'LSM')) )
group by T802.DEPT_CODE, T802.NAME_LEVEL_6, T802.NAME_LEVEL_1, T6172.CALENDARMONTHNAMEANDYEAR, T6172.CALENDARMONTHNUMBERINYEAR
order by c4, c3, c6, c7, c5
))where tile4=3 group by tile4
When I try to view data after creating the data set, I get the following error:
Failed to load XML
XML Parsing Error: mismatched tag. Expected: . Location: http://172.20.17.142:9704/xmlpserver/servlet/xdo Line Number 2, Column 580:
Now, when I replace those presentation variables (@{PV1}, @{PV2}) in the query with some hard-coded values, it works fine.
So I know it is the presentation variables that are causing this error.
How can I work around it?
There is no way to create an equivalent report without using the direct SQL.
Thanks in advance

I have found a solution to this problem after some more investigation. PowerQuery does not support using a SQL statement as the source for Teradata (possibly the same for other sources as well). This is "by design" according to Microsoft. Hence the problem is not caused by different PowerQuery versions, as mentioned above. When designing the query in PowerQuery in Excel, make sure to use the interface/navigation to create the query and select the tables, NOT a SQL statement. A SQL statement as the source works fine on a client machine, but not when scheduling it in Power BI in the cloud. I would like the functionality within PowerQuery and Excel to be the same as in Power BI in the cloud. And at least where there is a difference, it would be nice to have documentation or more descriptive errors.
//Jonas -
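Stepping back to the original error: an OBIEE presentation variable like @{PV1} is substituted by the server before the SQL (and the XML wrapped around it) is parsed, so a token that reaches the parser unreplaced breaks both, which matches the symptom that hard-coding the values fixes it. A toy version of that substitution step, with invented placeholder names:

```python
import re

def substitute_pvs(sql, values, defaults=None):
    """Replace @{name} tokens with supplied values (illustrative only)."""
    def repl(match):
        name = match.group(1)
        if name in values:
            return str(values[name])
        if defaults and name in defaults:
            return str(defaults[name])
        # An unresolved token is exactly what produces a malformed query.
        raise KeyError("no value for presentation variable %r" % name)
    return re.sub(r"@\{(\w+)\}", repl, sql)

sql = "SELECT * FROM t WHERE dept = '@{PV_D}' AND yr = @{PV_YEAR}"
resolved = substitute_pvs(sql, {"PV_D": "RSD", "PV_YEAR": 2013})
print(resolved)  # → SELECT * FROM t WHERE dept = 'RSD' AND yr = 2013
```

In a BIP data model, the usual equivalent is to declare data-model parameters and reference them as bind variables (:PV_D style) in the SQL, rather than leaving @{...} markers that BIP does not resolve.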
Is column order important in a query with the row_number function
Hi folks,
I am using Oracle 11g R2 on HP-UX machine.
I have two versions of a query with row_number, and I think they are the same, but their outputs are different. I changed only the column order in query 2.
Query 1 :
(SELECT
"LOOKUP_INPUT_SUBQUERY"."CONTRACT_SK" "CONTRACT_SK",
"LOOKUP_INPUT_SUBQUERY"."SIMCARD_SK" "SIMCARD_SK"
FROM (
SELECT row_number ()
OVER (
PARTITION BY "R_CON_SUBS_SIMCARD_LK".
"CONTRACT_SK"
ORDER BY
"R_CON_SUBS_SIMCARD_LK"."START_DATE" DESC,
"R_CON_SUBS_SIMCARD_LK"."SEQ_NUM" DESC NULLS LAST) /* EXPRESSION_3.OUTGRP1.SIRA */
"SIRA",
"R_CON_SUBS_SIMCARD_LK"."CONTRACT_SK" "CONTRACT_SK",
"R_CON_SUBS_SIMCARD_LK"."SIMCARD_SK" "SIMCARD_SK"
FROM "SRC_OZRDS"."R_CON_SUBS_SIMCARD_LK" "R_CON_SUBS_SIMCARD_LK")
"LOOKUP_INPUT_SUBQUERY"
WHERE ("LOOKUP_INPUT_SUBQUERY"."SIRA" = 1))
Output of this like that :
CONTRACT_SK SIMCARD_SK
1 1
1 3
1 4
1 5
1 6
1 11
1 12
1 14
1 15
1 16
Query 2 :
(SELECT
"LOOKUP_INPUT_SUBQUERY"."CONTRACT_SK" "CONTRACT_SK",
"LOOKUP_INPUT_SUBQUERY"."SIMCARD_SK" "SIMCARD_SK"
FROM (
SELECT
"R_CON_SUBS_SIMCARD_LK"."CONTRACT_SK" "CONTRACT_SK",
"R_CON_SUBS_SIMCARD_LK"."SIMCARD_SK" "SIMCARD_SK",
row_number ()
OVER (
PARTITION BY "R_CON_SUBS_SIMCARD_LK".
"CONTRACT_SK"
ORDER BY
"R_CON_SUBS_SIMCARD_LK"."START_DATE" DESC,
"R_CON_SUBS_SIMCARD_LK"."SEQ_NUM" DESC NULLS LAST) /* EXPRESSION_3.OUTGRP1.SIRA */
"SIRA"
FROM "SRC_OZRDS"."R_CON_SUBS_SIMCARD_LK" "R_CON_SUBS_SIMCARD_LK")
"LOOKUP_INPUT_SUBQUERY"
WHERE ("LOOKUP_INPUT_SUBQUERY"."SIRA" = 1))
Output of this like that:
2 874812
7 70097256
8 18734091
9 158024
10 815397739
13 22657919
19 83177779
20 82579529
22 5829949
23 35348926
25 3865978
I expected the second output, because there are lots of contract_sk values, but there is only one contract_sk in the first query's result. I don't get the point. What is the problem?

user8649469 wrote:
I changed only column order in query2.

So what else do you expect? If you order, for example, by last name, first name, don't you think the rows will be returned in a different order (and therefore the same row will have a different row number) than when ordering by first name, last name?
SY. -
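A quick way to convince yourself that the SELECT column order does not change what ROW_NUMBER computes: both forms return the same set of rows, and any apparent difference comes from the missing ORDER BY on the outer query (without one, the database may return rows in any order). A sketch in SQLite (3.25+ for window functions), with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lk (contract_sk INT, simcard_sk INT, start_date TEXT, seq_num INT);
INSERT INTO lk VALUES (1, 10, '2020-01-01', 1), (1, 11, '2020-02-01', 1),
                      (2, 20, '2020-01-15', 1), (2, 21, '2020-01-15', 2);
""")

# Query 1: ROW_NUMBER listed first; Query 2: ROW_NUMBER listed last.
q1 = """
SELECT contract_sk, simcard_sk FROM (
  SELECT ROW_NUMBER() OVER (PARTITION BY contract_sk
                            ORDER BY start_date DESC, seq_num DESC) AS sira,
         contract_sk, simcard_sk
  FROM lk) WHERE sira = 1
"""
q2 = """
SELECT contract_sk, simcard_sk FROM (
  SELECT contract_sk, simcard_sk,
         ROW_NUMBER() OVER (PARTITION BY contract_sk
                            ORDER BY start_date DESC, seq_num DESC) AS sira
  FROM lk) WHERE sira = 1
"""

rows1 = sorted(conn.execute(q1).fetchall())
rows2 = sorted(conn.execute(q2).fetchall())
assert rows1 == rows2  # same rows; only the unspecified return order can differ
print(rows1)  # → [(1, 11), (2, 21)]
```

Adding an ORDER BY to the outermost SELECT in both of the original queries should make their outputs match line for line.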
Problem with a query with a BLOB data type
Hi, I have a problem with this query in 11g R1:
SELECT
LOGTIMESTAMP,
LOGTIMEMILLIS,
MSGID,
XMLTYPE(MESSAGEBODY, nls_charset_id('AL32UTF8')).getClobVal() as LLamada
FROM
vordel.AUDIT_MESSAGE_PAYLOAD,
vordel.AUDIT_LOG_POINTS
WHERE
AUDIT_LOG_POINTS.LOGPOINTSPK = AUDIT_MESSAGE_PAYLOAD.MP_LOGPOINTSPK AND
LOGTIMESTAMP between TO_TIMESTAMP('03-12-2011 00:00','DD-MM-YYYY HH24:MI') and TO_TIMESTAMP('03-12-2011 12:00','DD-MM-YYYY HH24:MI')
and filtertype = 'LogMessagePayloadFilter'
and filtername like 'Log Llamada%'

MESSAGEBODY: the data type of the column is BLOB
The query throws this error when executed:
Error:
ORA-31011: XML parsing failed
ORA-19202: Error occurred in XML processing
LPX-00200: could not convert from encoding UTF-8 to UCS2
Error at line 1
ORA-06512: at "SYS.XMLTYPE", line 283
ORA-06512: at line 1

Could you check that the BLOB really contains UTF-8 encoded XML?
What is your database character set?

The BLOB contains UTF-8 encoded data, and the database I am connected to uses the AL32UTF8 character set, but my local client setting is "AMERICAN_AMERICA.WE8ISO8859P1". Is that the problem?
How could I change the character set of the local Oracle client to match the character set of the remote Oracle database?
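The LPX-00200 error is the symptom of exactly that kind of mismatch: bytes that are valid UTF-8 stop round-tripping cleanly once a Latin-1 (WE8ISO8859P1) client conversion gets involved. A small illustration of the damage, in plain Python with no Oracle involved:

```python
# UTF-8 encoded XML payload, as stored in the BLOB (example content invented).
xml_utf8 = "<pago><nombre>Señor Muñoz</nombre></pago>".encode("utf-8")

# Decoded with the right character set, the XML survives intact.
assert xml_utf8.decode("utf-8") == "<pago><nombre>Señor Muñoz</nombre></pago>"

# A Latin-1 client misreads each multi-byte UTF-8 sequence as two characters
# (mojibake); re-encoding that misread text no longer yields the original bytes.
misread = xml_utf8.decode("latin-1")
assert "Ã±" in misread            # 'ñ' (0xC3 0xB1 in UTF-8) seen as two chars
assert misread.encode("utf-8") != xml_utf8
print(misread)
```

On the Oracle client side, that conversion is governed by the NLS_LANG setting; making it match the data (e.g. AMERICAN_AMERICA.AL32UTF8 instead of WE8ISO8859P1) avoids the lossy step.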