More rows than I can handle
I have written a small JSP application.
It accesses an Oracle DB and returns many rows.
If the number of rows is too large, the buttons used to navigate the app disappear off the bottom of the screen.
(I know I should have considered this up front, but I am new to developing.)
Does anyone have code that uses JDBC/SQL to count the rows and link the count to Next and Previous buttons?
The Next and Previous buttons would re-render the existing form, but with the next 10 rows or the previous 10 rows.
Many Thanks.
Peter
You can have two functions in the JavaScript, like this:
function getNextRows(begin, end) { /* submit the form with nav=next */ }
function getPrevRows(begin, end) { /* submit the form with nav=prev */ }
In the body section you can read the value of nav and branch on the choice, like this:
String nav = (request.getParameter("nav")==null)?"next":request.getParameter("nav");
if (nav.equals("next"))
{ /* display the next 10 rows */ }
if (nav.equals("prev"))
{ /* display the previous 10 rows */ }
Set default values for begin and end, e.g. begin = 0, end = 10:
int begin = Integer.parseInt((request.getParameter("begin")==null)?"0":request.getParameter("begin"));
int end = Integer.parseInt((request.getParameter("end")==null)?"10":request.getParameter("end"));
Display 10 records at a time, and only render the links that make sense:
if (begin >= 10)
{ /* render the Previous link with begin-10 and end-10 */ }
if (end < max)
{ /* render the Next link with begin+10 and end+10 */ }
Hope this helps.
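For the database side, a minimal sketch (assuming Oracle, a placeholder table called results ordered by an id column, and bind variables for the page bounds): count the rows once, then fetch one page at a time with ROWNUM.
-- Total row count, used to compute max and decide whether a Next link is needed:
SELECT COUNT(*) FROM results;
-- Rows begin+1 .. end (e.g. 11..20): the classic Oracle ROWNUM pagination idiom.
-- The inner ORDER BY must be identical on every request so pages do not shift.
SELECT *
FROM (SELECT t.*, ROWNUM rn
      FROM (SELECT * FROM results ORDER BY id) t
      WHERE ROWNUM <= :end_row)
WHERE rn > :begin_row;
Run the page query from the JSP via a PreparedStatement, binding begin and end from the request parameters above.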
Similar Messages
-
Control array's scrollbar shows more rows than dimensions
When an array's dimension and Number of Rows property are set to the same value, its vertical scrollbar is shown and implies there is one more row available in the array than the actual array size.
This happens only if the array is a control, not when it is an indicator. As an indicator, no scrollbar is shown at run time; in edit mode, a scrollbar is shown even as an indicator.
Is there a way to keep the scrollbar from appearing when the number of rows equals the array size, yet have it appear when the number of rows is less than the array size?
Example vi attached.
Attachments:
Array scroll bar dimension.vi 18 KB

Jennifer,
In practice, the number of displayed array rows is restricted so that only enough rows are shown to remain within the bounds of the front panel; otherwise, as the array size is increased, the array would extend off the bottom of the screen. My modified version shows this restriction. But notice that as a control, the array shows one more row than was allocated (displayed as a disabled row). In fact, clicking the bottom arrow of the scrollbar will continue to add more rows to the array control.
Steve
Attachments:
Array scroll bar dimension.vi 22 KB -
Sender JDBC adapter SELECT / UPDATE issue - updates more rows than selected
Hi,
We have configured a Sender JDBC Adapter to poll records from an Oracle table based on a flag field and then update the flag for the selected records. When tested in the DEV and QA environments (where test data comes in intermittently and not in huge volumes), it's working fine.
Both the SELECT and UPDATE queries written in the Sender JDBC adapter are executed properly and change the status of the flag for the selected records from 'N' to 'Y' once they are read from the database.
select * from <table> where flag = 'N'.
update <table> set flag = 'Y' where flag = 'N'.
But in the PROD environment (with records getting updated in the database every second), after XI executes the SELECT query and just before the UPDATE query is executed, new records come into the Oracle table with status flag 'N'. So when the UPDATE query runs just after the SELECT query, these unselected records also get updated to 'Y'. Thus these records never get into the result set, never reach XI, and remain unprocessed.
So when XI does a SELECT and UPDATE on the Oracle DB table while an INSERT is happening concurrently from the other end, the JDBC sender adapter picks up a certain number of records but updates the status of more records than it picked up.
So how does XI deal with such a common scenario without dropping records?
Thanks,
Vishak

The condition being checked is the same for both the SELECT and UPDATE statements.
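One common way to close that window (a sketch, not from this thread; the intermediate 'P' value and the statement ordering are assumptions) is a three-state flag, so that the final UPDATE can only touch rows the SELECT actually returned:
-- 1. Claim a batch first: only rows present at this instant move from 'N' to 'P':
UPDATE <table> SET flag = 'P' WHERE flag = 'N';
-- 2. Read exactly the claimed rows; inserts arriving meanwhile stay at 'N':
SELECT * FROM <table> WHERE flag = 'P';
-- 3. Finalise only the claimed rows:
UPDATE <table> SET flag = 'Y' WHERE flag = 'P';
Step 3's WHERE clause can no longer match rows inserted after step 1, which is exactly the gap described above.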
Initially I tried setting the transaction isolation level on the database to repeatable_read and serializable, but it threw a java.sql.SQLException saying that these transaction levels were not valid.
I asked my DBA for these transaction-level permissions for the XI user, but the DB I am accessing provides only a view into other databases, so it's not possible. -
Query returns more rows than expected
1. select * from view_name where col1 = 'value1' returns 12 rows
2. select * from (view script) where col1 = 'value1' returns 24 rows
I have a view called view_name. If I use view_name directly in the query, it returns 12 rows. But if I use the select script directly in the FROM clause, it returns more rows. I am not able to find out why this is happening. Any pointers will be helpful.

Are you saying that the SQL for view_name and the view script are identical? Can you post them?
-
Batch input: how to fill in more rows than the screen can display.
Hi everybody,
I am working on a batch input for transaction ME38.
When, through my abap code, I am filling in the second screen's table, the system stops, telling me there is no field "43" on the screen.
Here is a summary of the batch input:
EKET-MENGE(1) 66,000
RM06E-EEIND(1) 30.06.2009
EKET-MENGE(31) 66,000
RM06E-EEIND(31) 30.06.2009
EKET-MENGE(43) 66,000
RM06E-EEIND(43) 30.06.2009
In fact, my screen has 31 lines, but 43 entries must be filled.
I can program logic to go to the next screen once entry 31 has been filled, but how do I know what the screen limit is?
My program will not always be started by me, and will most probably run in background jobs...
Do you see a way of processing this that can help me?
Is there a way not to be "screen size dependent"?
Thanks in advance for your help.
Regards,
Rudy

Hi...
This issue can be resolved by creating a recording in transaction SHDB for 43 records, pressing the "add more rows" button every time you need another line.
As the rows will be added to the table at runtime, you need not worry about the limit of the table entries, and it will work fine.
Hope this helps -
ALV: multiselect selects more rows than expected
If I have a table with many pages of data and do the following actions:
1) leave lead select in first page as it is
2) go to next page - with ctrl select 2nd row
3) go to next page - with ctrl select 1st row
4) go back to previous page - 2nd row selected as expected
5) go to next (3rd) page - 1st and 2nd row is selected (expected only 1st row to be selected)
There is no select event handler that could cause such behavior.
I have noticed this error on a standard table, too.
What is causing this error?

Oskars,
If you are using
-- one context node for this table,
-- you have set the context node's Cardinality to 0..n,
-- and the table UI element that is bound to this context node has its SelectionMode property set to 'auto' or 'multi',
then I think this is a bug in your version. I have tried it on my system (SAP ECC 6.0, SAP_ABA 700), and I cannot reproduce the behavior you have described in this thread. Check the CSN to confirm whether this is indeed a bug.
regards,
Tina Yang -
Sqlldr Loading more rows than expected
We are trying to load a child table that has a referential integrity constraint with its parent and some triggers on it that are used to change the case of the data. We use BCP to create a text file holding the 5 records. The text file generated by BCP is a kind of CSV file (~ is used instead of ,). When we try to load the records using sqlldr, it loads 10 records into the database. This happens when the 2 last rows violate the integrity constraint: the first 3 records are duplicated and the violating records are loaded as well. The strange thing is that the same record that was loaded is also pushed into the bad file. When we disable the trigger, everything works fine. For reference, here is a snapshot of the trigger and ctl files along with sample data:
CREATE OR REPLACE TRIGGER "CMD".RWC_SPCL_HANDLING_INSTRCTNS
before insert or update on RWC_SPCL_HANDLING_INSTRCTNS
REFERENCING NEW AS NEW
for each row
BEGIN
:new.ORG_ROADMARK:=upper(:new.ORG_ROADMARK);
:new.RWC_CD:=upper(:new.RWC_CD);
:new.SPCL_HANDLING_CD:=upper(:new.SPCL_HANDLING_CD);
EXCEPTION
WHEN OTHERS THEN RETURN;
END;
---------CTL FILE-------------
load data
infile 'data\rwc_spcl_handling_instrctns.txt'
into table rwc_spcl_handling_instrctns truncate
fields terminated by '~'
trailing nullcols
(ORG_ROADMARK ,
RWC_CD ,
SPCL_HANDLING_CD,
RWC_SPCL_HNDLNG_INSTRCTN_ID "rwc_sqn.nextval")
---------SAMPLE DATA---------
GSWR~CARCHTADO1~SLC
GSWR~CARCHTBUO1~SLC
GSWR~CARCHTCOO1~SLC
GSWR~CARSTRCANO~SHL
GSWR~CARSTRCANO~SLC
Thanks,
Ashok

Why not include the UPPER in the control file?
load data
infile 'data\rwc_spcl_handling_instrctns.txt'
into table rwc_spcl_handling_instrctns truncate
fields terminated by '~'
trailing nullcols
(ORG_ROADMARK "UPPER(:ORG_ROADMARK)" ,
RWC_CD "UPPER(:RWC_CD)" ,
SPCL_HANDLING_CD "UPPER(:SPCL_HANDLING_CD)" ,
RWC_SPCL_HNDLNG_INSTRCTN_ID "rwc_sqn.nextval") -
Oracle view returns more rows than its base query
O/S : AIX
Database : 11g R (11.1.0.6.0)
Query in question:
select A.CompanyCode, A.Code ElementCode, A.ItemTypeCode ElementItemTypeCode, A.SubcodeKey ElementSubcodeKey,
D.DecoSubcode01 SubCode01, D.DecoSubcode02 SubCode02, D.DecoSubcode03 SubCode03, D.DecoSubcode04 SubCode04,
D.DecoSubcode05 SubCode05, D.DecoSubcode06 SubCode06, D.DecoSubcode07 SubCode07, D.DecoSubcode08 SubCode08,
D.DecoSubcode09 SubCode09, D.DecoSubcode10 SubCode10, C.ItemTypeBCode, C.SubCode01B, C.SubCode02B, C.SubCode03B,
C.SubCode04B, C.SubCode05B, C.SubCode06B, C.SubCode07B, C.SubCode08B, C.SubCode09B, C.SubCode10B,
B1.ValueString SlipNo, B2.ValueString EmployeeCode, B3.ValueString SetNo, B4.ValueString SalesOrderCounterCode,
B5.ValueString SalesOrderCode, B6.ValueString Remarks, B7.ValueDecimal SumTareWeight, B8.ValueString PackingUMCode,
B9.ValueString PrimaryUMCode, B10.ValueString PlantCode, B11.ValueDecimal PackingFormCode, D.LogicalWarehouseCode FromWarehouseCode,
D.TemplateCode FromTemplateCode, D.PhysicalWarehouseCode FromPhysicalWarehouseCode, D.WHSLOCATIONWAREHOUSEZONECODE FromZoneCode,
D.WarehouseLocationCode FromLocationCode, E.LogicalWarehouseCode ToWarehouseCode, E.TemplateCode ToTemplateCode, E.PhysicalWarehouseCode ToPhysicalWarehouseCode,
E.WHSLOCATIONWAREHOUSEZONECODE ToZoneCode, E.WarehouseLocationCode ToLocationCode, D.TransactionDate, D.ItemTypeCode, E.WeightGross SumGrossWeight, E.WeightNet SumNetWeight
FROM Elements A, ADStorage B1, ADStorage B2, ADStorage B3, ADStorage B4, ADStorage B5, ADStorage B6, ADStorage B7,
ADStorage B8, ADStorage B9, ADStorage B10, ADStorage B11, GoodCutAndFentDetail C, StockTransaction D, StockTransaction E
where A.ABSUNIQUEID=B1.UNIQUEID and B1.NameEntityName='Elements' and B1.FieldName ='GoodCutAndFentSlipNo'
and A.ABSUNIQUEID=B2.UNIQUEID and B2.NameEntityName='Elements' and B2.FieldName ='GoodCutAndFentEmployee'
and A.ABSUNIQUEID=B3.UNIQUEID and B3.NameEntityName='Elements' and B3.FieldName ='GoodCutAndFentSetNo'
and A.ABSUNIQUEID=B4.UNIQUEID and B4.NameEntityName='Elements' and B4.FieldName ='GoodCutAndFentSOCounterCode'
and A.ABSUNIQUEID=B5.UNIQUEID and B5.NameEntityName='Elements' and B5.FieldName ='GoodCutAndFentSOCode'
and A.ABSUNIQUEID=B6.UNIQUEID and B6.NameEntityName='Elements' and B6.FieldName ='GoodCutAndFentRemarks'
and A.ABSUNIQUEID=B7.UNIQUEID and B7.NameEntityName='Elements' and B7.FieldName ='GoodCutAndFentTareWeight'
and A.ABSUNIQUEID=B8.UNIQUEID and B8.NameEntityName='Elements' and B8.FieldName ='GoodCutAndFentPackingUM'
and A.ABSUNIQUEID=B9.UNIQUEID and B9.NameEntityName='Elements' and B9.FieldName ='GoodCutAndFentPrimaryUM'
and A.ABSUNIQUEID=B10.UNIQUEID and B10.NameEntityName='Elements' and B10.FieldName ='GoodCutAndFentPlant'
and A.ABSUNIQUEID=B11.UNIQUEID and B11.NameEntityName='Elements' and B11.FieldName ='GoodCutAndFentPackingForm'
and A.CompanyCode=C.CompanyCode and SlipNo=C.SlipNo and C.SeqNo=1 and A.ItemTypeCode=C.ElementItemTypeCode
and A.SubcodeKey=C.ElementSubcodeKey and A.Code=C.ElementCode and A.CompanyCode=D.CompanyCode
and C.FromSTTransactionNumber=D.TransactionNumber and C.FromSTTransactionDetailNumber=D.TransactionDetailNumber
and A.CompanyCode=E.CompanyCode and C.ToSTTransactionNumber=E.TransactionNumber
and C.ToSTTransactionDetailNumber=E.TransactionDetailNumber
and SLIPNO='57575763636'
This query returns 1 row.
Then I created a view on this query, leaving out the condition SLIPNO='57575763636'.
Now when I use the view as shown below, it returns two rows:
select * from ViewGoodCutAndFent WHERE SLIPNO = '57575763636'
I am not able to determine where the problem area is. Thanks & Regards

In the query, SLIPNO is probably C.SlipNo.
In the view, SLIPNO is probably B1.ValueString.
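If so, that explains the difference exactly. A hedged illustration (names from the thread): the unqualified predicate binds to different columns inside and outside the view.
-- In the base query, the unqualified SLIPNO resolves to the base column:
AND C.SlipNo = '57575763636'
-- The view column SLIPNO, however, is defined from the alias B1.ValueString,
-- so the view-based query filters on that expression instead:
SELECT * FROM ViewGoodCutAndFent WHERE SlipNo = '57575763636';
-- Qualifying the predicate explicitly (C.SlipNo or B1.ValueString) in the
-- base query makes both forms return the same rows.
-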
Query in TimesTen taking more time than query in Oracle database
Hi,
Can anyone please explain why a query in TimesTen takes more time
than a query in the Oracle database?
I describe below, step by step, what my settings are and what I have done.........
1. This is the table I created in the Oracle database
(Oracle Database 10g Enterprise Edition Release 10.2.0.1.0)...
CREATE TABLE student (
id NUMBER(9) primary keY ,
first_name VARCHAR2(10),
last_name VARCHAR2(10)
);
2. THIS IS THE ANONYMOUS BLOCK I USED TO
POPULATE THE STUDENT TABLE (TOTAL 2599999 ROWS)...
declare
firstname varchar2(12);
lastname varchar2(12);
catt number(9);
begin
for cntr in 1..2599999 loop
firstname:=(cntr+8)||'f';
lastname:=(cntr+2)||'l';
if cntr like '%9999' then
dbms_output.put_line(cntr);
end if;
insert into student values(cntr,firstname, lastname);
end loop;
end;
3. MY DSN IS SET THE FOLLOWING WAY..
DATA STORE PATH- G:\dipesh3repo\db
LOG DIRECTORY- G:\dipesh3repo\log
PERM DATA SIZE-1000
TEMP DATA SIZE-1000
MY TIMESTEN VERSION-
C:\Documents and Settings\dipesh>ttversion
TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
Instance admin: dipesh
Instance home directory: G:\TimestTen\TT70_32
Daemon home directory: G:\TimestTen\TT70_32\srv\info
THEN I CONNECT TO THE TIMESTEN DATABASE
C:\Documents and Settings\dipesh> ttisql
command>connect "dsn=dipesh3;oraclepwd=tiger";
4. THEN I START THE AGENT
call ttCacheUidPwdSet('SCOTT','TIGER');
Command> CALL ttCacheStart();
5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
create readonly cache group rc_student autorefresh
interval 5 seconds from student
(id int not null primary key, first_name varchar2(10), last_name varchar2(10));
load cache group rc_student commit every 100 rows;
6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
I SET THE TIMING..
command>TIMING 1;
consider this query now..
Command> select * from student where first_name='2155666f';
< 2155658, 2155666f, 2155660l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
another query-
Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
2206: Table SCOTT.STUDENTS not found
Execution time (SQLPrepare) = 0.074964 seconds.
The command failed.
Command> SELECT * FROM STUDENT where first_name='2093434f';
< 2093426, 2093434f, 2093428l >
1 row found.
Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
Command>
7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
ID FIRST_NAME LAST_NAME
1498663 1498671f 1498665l
Elapsed: 00:00:00.15
Can anyone please explain why the query in TimesTen takes more time
than the query in the Oracle database?
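One thing worth checking first (a guess, not raised in the thread): whether first_name is indexed on the TimesTen side. The cache group only declares a primary key on id, so the predicate above forces a serial full scan of 2.6 million rows. A sketch (index name is hypothetical):
-- Hypothetical fix: index the filtered column, then refresh optimizer statistics:
CREATE INDEX student_fname_idx ON student (first_name);
CALL ttOptUpdateStats('student', 1);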
Message was edited by: Dipesh Majumdar
Message was edited by: user542575

TimesTen
Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
Version: 7.0.4.0.0 64 bit
Schema:
create usermanaged cache group factCache from
MV_US_DATAMART (
ORDER_DATE DATE,
IF_SYSTEM VARCHAR2(32) NOT NULL,
GROUPING_ID TT_BIGINT,
TIME_DIM_ID TT_INTEGER NOT NULL,
BUSINESS_DIM_ID TT_INTEGER NOT NULL,
ACCOUNT_DIM_ID TT_INTEGER NOT NULL,
ORDERTYPE_DIM_ID TT_INTEGER NOT NULL,
INSTR_DIM_ID TT_INTEGER NOT NULL,
EXECUTION_DIM_ID TT_INTEGER NOT NULL,
EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
NO_ORDERS TT_BIGINT,
FILLED_QUANTITY TT_BIGINT,
CNT_FILLED_QUANTITY TT_BIGINT,
QUANTITY TT_BIGINT,
CNT_QUANTITY TT_BIGINT,
COMMISSION BINARY_FLOAT,
CNT_COMMISSION TT_BIGINT,
FILLS_NUMBER TT_BIGINT,
CNT_FILLS_NUMBER TT_BIGINT,
AGGRESSIVE_FILLS TT_BIGINT,
CNT_AGGRESSIVE_FILLS TT_BIGINT,
NOTIONAL BINARY_FLOAT,
CNT_NOTIONAL TT_BIGINT,
TOTAL_PRICE BINARY_FLOAT,
CNT_TOTAL_PRICE TT_BIGINT,
CANCELLED_ORDERS_COUNT TT_BIGINT,
CNT_CANCELLED_ORDERS_COUNT TT_BIGINT,
ROUTED_ORDERS_NO TT_BIGINT,
CNT_ROUTED_ORDERS_NO TT_BIGINT,
ROUTED_LIQUIDITY_QTY TT_BIGINT,
CNT_ROUTED_LIQUIDITY_QTY TT_BIGINT,
REMOVED_LIQUIDITY_QTY TT_BIGINT,
CNT_REMOVED_LIQUIDITY_QTY TT_BIGINT,
ADDED_LIQUIDITY_QTY TT_BIGINT,
CNT_ADDED_LIQUIDITY_QTY TT_BIGINT,
AGENT_CHARGES BINARY_FLOAT,
CNT_AGENT_CHARGES TT_BIGINT,
CLEARING_CHARGES BINARY_FLOAT,
CNT_CLEARING_CHARGES TT_BIGINT,
EXECUTION_CHARGES BINARY_FLOAT,
CNT_EXECUTION_CHARGES TT_BIGINT,
TRANSACTION_CHARGES BINARY_FLOAT,
CNT_TRANSACTION_CHARGES TT_BIGINT,
ORDER_MANAGEMENT BINARY_FLOAT,
CNT_ORDER_MANAGEMENT TT_BIGINT,
SETTLEMENT_CHARGES BINARY_FLOAT,
CNT_SETTLEMENT_CHARGES TT_BIGINT,
RECOVERED_AGENT BINARY_FLOAT,
CNT_RECOVERED_AGENT TT_BIGINT,
RECOVERED_CLEARING BINARY_FLOAT,
CNT_RECOVERED_CLEARING TT_BIGINT,
RECOVERED_EXECUTION BINARY_FLOAT,
CNT_RECOVERED_EXECUTION TT_BIGINT,
RECOVERED_TRANSACTION BINARY_FLOAT,
CNT_RECOVERED_TRANSACTION TT_BIGINT,
RECOVERED_ORD_MGT BINARY_FLOAT,
CNT_RECOVERED_ORD_MGT TT_BIGINT,
RECOVERED_SETTLEMENT BINARY_FLOAT,
CNT_RECOVERED_SETTLEMENT TT_BIGINT,
CLIENT_AGENT BINARY_FLOAT,
CNT_CLIENT_AGENT TT_BIGINT,
CLIENT_ORDER_MGT BINARY_FLOAT,
CNT_CLIENT_ORDER_MGT TT_BIGINT,
CLIENT_EXEC BINARY_FLOAT,
CNT_CLIENT_EXEC TT_BIGINT,
CLIENT_TRANS BINARY_FLOAT,
CNT_CLIENT_TRANS TT_BIGINT,
CLIENT_CLEARING BINARY_FLOAT,
CNT_CLIENT_CLEARING TT_BIGINT,
CLIENT_SETTLE BINARY_FLOAT,
CNT_CLIENT_SETTLE TT_BIGINT,
CHARGEABLE_TAXES BINARY_FLOAT,
CNT_CHARGEABLE_TAXES TT_BIGINT,
VENDOR_CHARGE BINARY_FLOAT,
CNT_VENDOR_CHARGE TT_BIGINT,
ROUTING_CHARGES BINARY_FLOAT,
CNT_ROUTING_CHARGES TT_BIGINT,
RECOVERED_ROUTING BINARY_FLOAT,
CNT_RECOVERED_ROUTING TT_BIGINT,
CLIENT_ROUTING BINARY_FLOAT,
CNT_CLIENT_ROUTING TT_BIGINT,
TICKET_CHARGES BINARY_FLOAT,
CNT_TICKET_CHARGES TT_BIGINT,
RECOVERED_TICKET_CHARGES BINARY_FLOAT,
CNT_RECOVERED_TICKET_CHARGES TT_BIGINT,
PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
READONLY);
No of rows: 2228558
Config:
< CkptFrequency, 600 >
< CkptLogVolume, 0 >
< CkptRate, 0 >
< ConnectionCharacterSet, US7ASCII >
< ConnectionName, tt_us_dma >
< Connections, 64 >
< DataBaseCharacterSet, AL32UTF8 >
< DataStore, e:\andrew\datacache\usDMA >
< DurableCommits, 0 >
< GroupRestrict, <NULL> >
< LockLevel, 0 >
< LockWait, 10 >
< LogBuffSize, 65536 >
< LogDir, e:\andrew\datacache\ >
< LogFileSize, 64 >
< LogFlushMethod, 1 >
< LogPurge, 0 >
< Logging, 1 >
< MemoryLock, 0 >
< NLS_LENGTH_SEMANTICS, BYTE >
< NLS_NCHAR_CONV_EXCP, 0 >
< NLS_SORT, BINARY >
< OracleID, NYCATP1 >
< PassThrough, 0 >
< PermSize, 4000 >
< PermWarnThreshold, 90 >
< PrivateCommands, 0 >
< Preallocate, 0 >
< QueryThreshold, 0 >
< RACCallback, 0 >
< SQLQueryTimeout, 0 >
< TempSize, 514 >
< TempWarnThreshold, 90 >
< Temporary, 1 >
< TransparentLoad, 0 >
< TypeMode, 0 >
< UID, OS_OWNER >
ORACLE:
Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
Schema:
CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
TABLESPACE TS_OS
PARTITION BY RANGE (ORDER_DATE)
PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE TS_OS,
PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
LOGGING
NOCOMPRESS
TABLESPACE TS_OS
NOCACHE
NOCOMPRESS
NOPARALLEL
BUILD DEFERRED
USING INDEX
TABLESPACE TS_OS_INDEX
REFRESH FAST ON DEMAND
WITH PRIMARY KEY
ENABLE QUERY REWRITE
AS
SELECT order_date, if_system,
GROUPING_ID (order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id
) GROUPING_ID,
/* ============ DIMENSIONS ============ */
time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
instr_dim_id, execution_dim_id, exec_exchange_dim_id,
/* ============ MEASURES ============ */
-- o.FX_RATE /* FX_RATE */,
COUNT (*) no_orders,
-- SUM(NO_ORDERS) NO_ORDERS,
-- COUNT(NO_ORDERS) CNT_NO_ORDERS,
SUM (filled_quantity) filled_quantity,
COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
COUNT (quantity) cnt_quantity, SUM (commission) commission,
COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
COUNT (fills_number) cnt_fills_number,
SUM (aggressive_fills) aggressive_fills,
COUNT (aggressive_fills) cnt_aggressive_fills,
SUM (fx_rate * filled_quantity * average_price) notional,
COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
SUM (fx_rate * fills_number * average_price) total_price,
COUNT (fx_rate * fills_number * average_price) cnt_total_price,
SUM (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END) cancelled_orders_count,
COUNT (CASE
WHEN order_status = 'C'
THEN 1
ELSE 0
END
) cnt_cancelled_orders_count,
-- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
-- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
-- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
SUM (routed_orders_no) routed_orders_no,
COUNT (routed_orders_no) cnt_routed_orders_no,
SUM (routed_liquidity_qty) routed_liquidity_qty,
COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
SUM (removed_liquidity_qty) removed_liquidity_qty,
COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
SUM (added_liquidity_qty) added_liquidity_qty,
COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
SUM (agent_charges) agent_charges,
COUNT (agent_charges) cnt_agent_charges,
SUM (clearing_charges) clearing_charges,
COUNT (clearing_charges) cnt_clearing_charges,
SUM (execution_charges) execution_charges,
COUNT (execution_charges) cnt_execution_charges,
SUM (transaction_charges) transaction_charges,
COUNT (transaction_charges) cnt_transaction_charges,
SUM (order_management) order_management,
COUNT (order_management) cnt_order_management,
SUM (settlement_charges) settlement_charges,
COUNT (settlement_charges) cnt_settlement_charges,
SUM (recovered_agent) recovered_agent,
COUNT (recovered_agent) cnt_recovered_agent,
SUM (recovered_clearing) recovered_clearing,
COUNT (recovered_clearing) cnt_recovered_clearing,
SUM (recovered_execution) recovered_execution,
COUNT (recovered_execution) cnt_recovered_execution,
SUM (recovered_transaction) recovered_transaction,
COUNT (recovered_transaction) cnt_recovered_transaction,
SUM (recovered_ord_mgt) recovered_ord_mgt,
COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
SUM (recovered_settlement) recovered_settlement,
COUNT (recovered_settlement) cnt_recovered_settlement,
SUM (client_agent) client_agent,
COUNT (client_agent) cnt_client_agent,
SUM (client_order_mgt) client_order_mgt,
COUNT (client_order_mgt) cnt_client_order_mgt,
SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
SUM (client_trans) client_trans,
COUNT (client_trans) cnt_client_trans,
SUM (client_clearing) client_clearing,
COUNT (client_clearing) cnt_client_clearing,
SUM (client_settle) client_settle,
COUNT (client_settle) cnt_client_settle,
SUM (chargeable_taxes) chargeable_taxes,
COUNT (chargeable_taxes) cnt_chargeable_taxes,
SUM (vendor_charge) vendor_charge,
COUNT (vendor_charge) cnt_vendor_charge,
SUM (routing_charges) routing_charges,
COUNT (routing_charges) cnt_routing_charges,
SUM (recovered_routing) recovered_routing,
COUNT (recovered_routing) cnt_recovered_routing,
SUM (client_routing) client_routing,
COUNT (client_routing) cnt_client_routing,
SUM (ticket_charges) ticket_charges,
COUNT (ticket_charges) cnt_ticket_charges,
SUM (recovered_ticket_charges) recovered_ticket_charges,
COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
FROM us_datamart_raw
GROUP BY order_date,
if_system,
business_dim_id,
time_dim_id,
account_dim_id,
ordertype_dim_id,
instr_dim_id,
execution_dim_id,
exec_exchange_dim_id;
-- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
-- by Oracle with the associated materialized view.
CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
NOLOGGING
NOPARALLEL
COMPRESS 7;
No of rows: 2228558
The query (taken from Mondrian) I run against each of them is:
select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
--, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
--, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
--, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
--, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
--, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
--, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
--, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
--, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
--, sum("MV_US_DATAMART"."COMMISSION") as "m9"
--, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
--, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
--,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
--,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
--, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
--, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
--, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
--, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
--,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
--, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
I uncomment a column at a time and rerun. I improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
No Columns ORACLE TimesTen
1 1.05 0.94
2 1.07 1.47
3 2.04 1.8
4 2.06 2.08
5 2.09 2.4
6 3.01 2.67
7 4.02 3.06
8 4.03 3.37
9 4.04 3.62
10 4.06 4.02
11 4.08 4.31
12 4.09 4.61
13 5.01 4.76
14 5.02 5.06
15 5.04 5.25
16 5.05 5.48
17 5.08 5.84
18 6 6.21
19 6.02 6.34
20 6.04 6.75 -
I synced my iPhone with itunes and iMac and got many more contacts than I wanted on my iPhone. How can I delete quite a few of them at one time as opposed to deleting one at a time?
iTunes does not handle the import of photos/videos from the iPhone's Camera Roll.
If this is not the computer you sync your iPhone with, you should have received a warning message that your iPhone is associated with an iTunes library on another computer; when trying to transfer any iTunes content from a different computer, a warning indicates that all iTunes content on the iPhone will be erased first.
When connecting an iPhone to iTunes on another computer, none of the options under the various tabs for the iPhone sync preferences with iTunes are selected automatically. This means you had to select Sync Contacts under the Info tab for your iPhone sync preferences with iTunes on your wife's computer.
Assuming you are syncing contacts with a supported address book app on your computer, connect your iPhone to iTunes on your computer and, without syncing, under the Info tab for your iPhone sync preferences below the Advanced section, select Contacts under "Replace information on this iPhone", followed by a sync.
Importing photos/videos from the Camera Roll can be done with a computer that is not used for syncing the iPhone, since syncing with iTunes is not involved:
http://support.apple.com/kb/HT4083 -
ColdFusion 11: custom serialisers. More questions than answers
G'day:
I am reposting this from my blog ("ColdFusion 11: custom serialisers. More questions than answers") at the suggestion of Adobe support:
@dacCfml @ColdFusion Can you post your queries at http://t.co/8UF4uCajTC for all cfclient and mobile queries.— Anit Kumar Panda (@anitkumar85) April 29, 2014
This particular question is not regarding <cfclient>, hence posting it on the regular forum, not on the mobile-specific one as Anit suggested. I have edited this in places to remove language that will be deemed inappropriate by the censors here. Changes I have made are in [square brackets]. The forums software here has broken some of the styling, but so be it.
G'day:
I've been wanting to write an article about the new custom serialiser one can have in ColdFusion 11, but having looked at it I have more questions than I have answers, so I have put it off. But, equally, I have no place to ask the questions, so I'm stymied. So I figured I'd write an article covering my initial questions. Maybe someone can answer then.
ColdFusion 11 has added the notion of a custom serialiser a website can have (docs: "Support for pluggable serializer and deserializer"). The idea is that whilst Adobe can dictate the serialisation rules for its own data types, it cannot sensibly infer how a CFC instance might get serialised: as each CFC represents a different data "schema", there is no "one size fits all" approach to handling it. So this is where the custom serialiser comes in. Kind of. If it wasn't a bit rubbish. Here's my exploration thusfar.
One can specify a custom serialiser by adding a setting to Application.cfc:
component {
    this.name = "serialiser01";
    this.customSerializer = "Serialiser";
}
In this case the value - Serialiser - is the name of a CFC, eg:
// Serialiser.cfc
component {

    public function canSerialize(){
        logArgs(args=arguments, from=getFunctionCalledName());
        return true;
    }

    public function canDeserialize(){
        logArgs(args=arguments, from=getFunctionCalledName());
        return true;
    }

    public function serialize(){
        logArgs(args=arguments, from=getFunctionCalledName());
        return "SERIALISED";
    }

    public function deserialize(){
        logArgs(args=arguments, from=getFunctionCalledName());
        return "DESERIALISED";
    }

    private function logArgs(required struct args, required string from){
        var dumpFile = getDirectoryFromPath(getCurrentTemplatePath()) & "dump_#from#.html";
        if (fileExists(dumpFile)){
            fileDelete(dumpFile);
        }
        writeDump(var=args, label=from, output=dumpFile, format="html");
    }

}
This CFC needs to implement four methods:
canSerialize() - indicates whether something can be serialised by the serialiser;
canDeserialize() - indicates whether something can be deserialised by the serialiser;
serialize() - the function used to serialise something
deserialize() - the function used to deserialise something
I'm being purposely vague on those functions for a reason. I'll get to that.
The first [issue] in the implementation here is that for the custom serialisation to work, all four of those methods must be implemented in the serialisation CFC. So common sense would dictate that a way to enforce that would be to require the CFC to implement an interface. That's what interfaces are for. Now I know people will argue the merit of having interfaces in CFML, but I don't really give a [monkey's] about that: CFML has interfaces, and this is what they're for. So when one specifies the serialiser in Application.cfc and it doesn't fulfil the interface requirement, it should error. Right then. When one specifies the inappropriate tool for the job. What instead happens is if the functions are omitted, one will get erratic behaviour in the application, through to outright errors when ColdFusion goes to call the functions and cannot find them. EG: if I have canSerialize() but no serialize() method, CF will error when it comes to serialise something:
JSON serialization failure: Unable to serialize to JSON.
Reason : The method serialize was not found in component C:/wwwroot/scribble/shared/git/blogExamples/coldfusion/CF11/customerserialiser/Serialiser.cfc.
The error occurred in C:/wwwroot/scribble/shared/git/blogExamples/coldfusion/CF11/customerserialiser/testBasic.cfm: line 4
2 : o = new Basic();
3 :
4 : serialised = serializeJson(o);
5 : writeDump([serialised]);
6 :
Note that the error comes when I go to serialise something, not when ColdFusion is told about the serialiser in the first place. This is just lazy/thoughtless implementation on the part of Adobe. It invites bugs, and is just sloppy.
The second [issue] follows immediately on from this.
Given my sample serialiser above, I then run this test code to examine some stuff:
o = new Basic();
serialised = serializeJson(o);
writeDump([serialised]);
deserialised = deserializeJson(serialised);
writeDump([deserialised]);
So all I'm doing is using (de)serializeJson() as a baseline to see how the functions work. Here's Basic.cfc, btw:
component { }
And the test output:
array
1
SERIALISED
array
1
DESERIALISED
This is as one would expect. OK, so that "works". But now... you'll've noted I am logging the arguments each of the serialisation methods receives as it goes.
Here's the arguments passed to canSerialize():
canSerialize - struct
1
XML
My reaction to that is: "[WTH]?" Why is canSerialize() being passed the string "XML" when I'm trying to serialise an object of type Basic.cfc?
Here's the docs for canSerialize() (from the page I linked to earlier):
CanSerialize - Returns a boolean value and takes the "Accept Type" of the request as the argument. You can return true if you want the customserialzer to serialize the data to the passed argument type.
Again, back to "[WTH]?" What's the "Accept type" of the request? And what the hell has the request got to do with a call to serializeJson()? You might think that "Accept type" references some HTTP header or something, but there is no "Accept type" header in the HTTP spec (that I can find: "Hypertext Transfer Protocol -- HTTP/1.1: 14 Header Field Definitions"). There's an "Accept" header (in this case: "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"), and other ones like "Accept-Encoding", "Accept-Language"... but none of which contain a value of "XML". Even if there was... how would it be relevant to the question as to whether a Basic.cfc instance can be serialised? Raised as bug: 3750730.
serialize() gets more sensible arguments:
serialize - struct
1
component scribble.shared.git.blogExamples.coldfusion.CF11.customerserialiser.Basic
2
JSON
So the first is the object to serialise (which surely should be part of the question canSerialize() is supposed to ask), and the second is the format to serialise to. Cool.
canDeserialize() is passed this:
canDeserialize - struct
1
JSON
I guess it's because it's being called from deserializeJson(), so it's legit to expect the input value is indeed JSON. Fair enough. (Note: I'm not actually passing it JSON, but that's beside the point here).
And deserialize() is passed this:
deserialize - struct
1
SERIALISED
2
JSON
3
[empty string]
The first argument is the value to work on, and the second is the type of deserialisation to do. I have no idea what the third argument is for, and it's not mentioned directly or indirectly on that docs page. So dunno what the story is there.
The next issue isn't a code-oriented one, but an implementation one: how the hell are we expected to work with this?
The only way to work here is for each function to have a long array of IF/ELSEIF statements which somehow identify each object type that is serialisable, and then return true from canSerialise(), or in the case of serialize(), go ahead and do the serialisation. So this means this one CFC needs to know about everything which can be serialised in the entire application. Talk about a failure in "separation of concerns".
You know the best way of determining if an object can be seriaslised? Ask it! Don't rely on something else needing to know. This can be achieved very easily in one of two ways:
Check to see if the object implements a "Serializable" interface, which requires a serialize() method to exist.
Or simply take the duck-typing approach: if a CFC implements a serialize() method: it can be serialised. By calling that method. Job done.
Either approach would work fine, keeps things nicely encapsulated, and I see merits in both. And either make far more sense than Adobe's approach. Which is like something from the "OO Failures Special Needs" class.
Deserialisation is trickier. Because it relies on somehow working out how to deserialise() an object. I'm not sure of the best approach here, but - again - how to deserialise something should be as close to the thing needing deserialisation as possible. IE: something in the serialised data itself which can be used to bootstrap the process.
This could simply be a matter of specifying a CFC type at a known place in the serialised data. EG: Adobe stipulates that if the serialised data is JSON, and at the top level of the JSON is a key eg: type, and the value is an extant CFC... use that CFC's deserialize() method. Or it could look for an object which contains a type and a method, or whatever. But Adobe can specify a contract there.
The only place I see a centralised CFC being relevant here is for a mechanism for handling serialised data that is neither a ColdFusion internal type, nor identifiable as above. In this case, perhaps they could provide a mechanism for a serialisation router, which basically has a bunch of routes (if/elseifs if need be) which contains logic as to how to work out how to deserialise the data. But it should not be the actual deserialiser, it should simply have the mechanism to find out how to do it. This is actually pretty much the same in operation as the deserialize() approach in the current implementation, but it doesn't need the canDeserialize() method (it can return false at the end of the routing), and it doesn't need to know about serialising. And also it's not the main mechanism to do the deserialisation, it's just the fall back if the prescribed approach hasn't been used.
TBH, this still sounds a bit jerry-built, and I'm open for better suggestions. This is probably a well-trod subject in other languages, so it might be worth looking at how the likes of Groovy, Ruby or even PHP (eek!) achieve this.
There's still another issue with the current approach. And this demonstrates that the Adobe guys don't actually work with either CFML applications or even modern websites. This approach only works for a single, stand-alone website (like how we might have done in 2001). What if I'm not in the business of building websites, but I build applications such as FW/1 or ColdBox or the like? Or any sort of "helper" application. They cannot use the current Adobe implementation of the customserializer. Why? Because the serialisation code needs to be in a website-specific CFC. There's no way for Luis to implement a custom serialiser in ColdBox (for example), and then have it work for someone using ColdBox. Because it relies on either editing Application.cfc to specify a different CFC, or editing the existing customSerializer CFC. Neither of which are very good solutions. This should have been immediately apparent to the Adobe engineer(s) implementing this stuff had they actually had any experience with modern web applications (which generally aren't just a single monolithic site, but an aggregation of various other sub applications). Equally, I know it's not a case of having thought about this and [I'm just missing something], because when I asked them the other day, at first they didn't even get what I was asking, but when I clarified were just like "oh yeah... um... err... yeah, you can't do that. We'll... have to... ah yeah". This has been raised as bug 3750731.
So I declare the intent here valid, but the implementation to be more alpha- / pre-release- quality, not release-ready.
Still: it could be easily deprecated and rework fairly easily. I've raised this as bug 3750732.
Or am I missing something?
Adam

Yes, you can easily add additional questions to the Lookup.WebClient.Questions Lookup to allow some additional choices. We have added quite a few additional choices; we have noticed that removing them once people have selected them causes some errors.
You can also customize the required number of questions each user selects when setting them up, as well as the number required to be correct to reset the password; these options are in the System Configuration settings.
If you need multi-language versions of the questions, you will also need to modify the appropriate language resource file in the xlWebApp.war file to provide the necessary translations for the values entered into the Lookup. -
OR is taking much more time than UNION
hi gems..
I have written a query using the UNION clause and it took 12 seconds to give the result.
Then I wrote the same query using the OR operator and it took 78 seconds to give the result set.
The tables referred to by this query have no indexes.
The cost plan for the query with OR is also much lower than that with UNION.
Please suggest why OR is taking more time.
thanks in advance

Here's a ridiculously simple example (these tables don't even have any rows in them).
If you had separate indexes on col1 and col2, the optimizer might use the indexes in the union but not in the or statement:
Which is faster will depend on the usual list of things.
Of course, the union also requires a sort operation.
SQL> create table table1
2 (col1 number, col2 number, col3 number, col4 number);
Table created.
SQL> create index t1_idx1 on table1(col1);
Index created.
SQL> create index t1_idx2 on table1(col2);
Index created.
SQL> explain plan for
2 select col1, col2, col3, col4
3 from table1
4 where col1> = 123
5 or col2 <= 456;
Explained.
SQL> @xp
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 52 | 2 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| TABLE1 | 1 | 52 | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("COL1">=123 OR "COL2"<=456)
SQL> explain plan for
2 select col1, col2, col3, col4
3 from table1
4 where col1 >= 123
5 union
6 select col1, col2, col3, col4
7 from table1
8 where col2 <= 456;
Explained.
SQL> @xp
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 104 | 4 (75)| 00:00:01 |
| 1 | SORT UNIQUE | | 2 | 104 | 4 (75)| 00:00:01 |
| 2 | UNION-ALL | | | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| TABLE1 | 1 | 52 | 1 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | T1_IDX1 | 1 | | 1 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| TABLE1 | 1 | 52 | 1 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | T1_IDX2 | 1 | | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
4 - access("COL1">=123)
6 - access("COL2"<=456) -
Count(*) for a select stmt takes more time than executing that sql stmt
HI
count(*) for the select stmt takes more time than executing that sql stmt.
Executing the particular select stmt takes 2.47 mins, and that select stmt uses the /*+ parallel */ hint (SQL optimizer) for faster execution.
But if I try to find the total number of rows in that query, it takes more time:
almost 2.30 hrs, still running to find count(col).
Please help me to get the count of rows faster.
thanks in advance...

797525 wrote:
HI
count(*) for the select stmt takes more time than executing that sql stmt.
Executing the particular select stmt takes 2.47 mins, and that select stmt uses the /*+ parallel */ hint (SQL optimizer) for faster execution.
But if I try to find the total number of rows in that query, it takes more time:
almost 2.30 hrs, still running to find count(col).
Please help me to get the count of rows faster.
thanks in advance...That may be because your client is displaying only the first few records when you are running the "SELECT *". But when you run "COUNT(*)", the whole records has to be counted.
As already mentined please read teh FAQ to post tuning questions. -
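Two common shortcuts, sketched here under stated assumptions (MY_TABLE is a placeholder; the statistics route is only as accurate as the last gather):
-- Approximate count from optimizer statistics (an estimate, not a live count):
SELECT num_rows FROM user_tables WHERE table_name = 'MY_TABLE';
-- Exact count, with the same parallel hint applied to the count itself:
SELECT /*+ PARALLEL(t, 4) */ COUNT(*) FROM my_table t;
-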
Delete DML statement takes more time than Update or Insert
I want to know whether a delete statement takes more time than an update or insert DML command. Please help in solving the doubt.
Regards.

I agree: the amount of ROLLBACK (called UNDO) and ROLLFORWARD (called REDO) information written by the various statements has a crucial impact on the speed.
I did some simple benchmarks for INSERT, UPDATE and DELETE using a 1 million row simple table. As an alternative to the long UPDATEs and DELETEs, I tested also the usual workarounds (which have only partial applicability).
Here are the conclusions (quite important in my opinion, but not to be taken as universal truth):
1. Duration of DML statements for 1 million rows operations (with the size of redo generated):
--- INSERT: 3.5 sec (redo: 3.8 MB)
--- UPDATE: 24.8 sec (redo: 240 MB)
--- DELETE: 26.1 sec (redo: 228 MB)
2. Replacement of DELETE with TRUNCATE
--- DELETE: 26.1 sec (rollback: 228 MB)
--- TRUNCATE: 0.1 sec (rollback: 0.1 MB)
3. Replacement of UPDATE with CREATE new TABLE AS SELECT (followed by DROP old and RENAME new AS old)
--- UPDATE: 24.8 sec (redo_size: 240 MB)
--- replacement: 3.5 sec (rollback: 0.3 MB)
-- * Preparation *
CREATE TABLE ao AS
SELECT rownum AS id,
'N' || rownum AS name
FROM all_objects, all_objects
WHERE rownum <= 1000000;
CREATE OR REPLACE PROCEDURE print_my_stat(p_name IN v$statname.NAME%TYPE) IS
v_value v$mystat.VALUE%TYPE;
BEGIN
SELECT b.VALUE
INTO v_value
FROM v$statname a,
v$mystat b
WHERE a.statistic# = b.statistic# AND lower(a.NAME) LIKE lower(p_name);
dbms_output.put_line('*' || p_name || ': ' || v_value);
END print_my_stat;
-- * Test 1: Comparison of INSERT, UPDATE and DELETE *
CREATE TABLE ao1 AS
SELECT * FROM ao WHERE 1 = 2;
exec print_my_stat('redo_size')
*redo_size= 277,220,544
INSERT INTO ao1 SELECT * FROM ao;
1000000 rows inserted
executed in 3.465 seconds
exec print_my_stat('redo_size')
*redo_size= 301,058,852
commit;
UPDATE ao1 SET name = 'M' || SUBSTR(name, 2);
1000000 rows updated
executed in 24.786 seconds
exec print_my_stat('redo_size')
*redo_size= 545,996,280
commit;
DELETE FROM ao1;
1000000 rows deleted
executed in 26.128 seconds
exec print_my_stat('redo_size')
*redo_size= 783,655,196
commit;
-- * Test 2: Replace DELETE with TRUNCATE *
DROP TABLE ao1;
CREATE TABLE ao1 AS
SELECT * FROM ao;
exec print_my_stat('redo_size')
*redo_size= 807,554,512
TRUNCATE TABLE ao1;
executed in 0.08 seconds
exec print_my_stat('redo_size')
*redo_size= 807,616,528
-- * Test 3: Replace UPDATE with CREATE TABLE AS SELECT *
INSERT INTO ao1 SELECT * FROM ao;
commit;
exec print_my_stat('redo_size')
*redo_size= 831,525,556
CREATE TABLE ao2 AS
SELECT id, 'M' || SUBSTR(name, 2) name FROM ao1;
executed in 3.125 seconds
DROP TABLE ao1;
executed in 0.32 seconds
RENAME ao2 TO ao1;
executed in 0.01 seconds
exec print_my_stat('redo_size')
*redo_size= 831,797,608 -
Outer join with first of zero or more rows
Hi!
I'm trying to make a join where one part is the first row of zero or more rows. I seem to be getting much poorer performance than I ought to. My tables Node and RelayGroup have 6000 and 3000 rows respectively, while the CommandHist table has 10 million rows. Here are two queries that work, but are fairly slow:
-- 37s, 28s, 27s
SELECT DISTINCT n.NodeName,(SELECT Reason from commandhist where CommandHistId = (SELECT MAX(CommandHistID) FROM CommandHist WHERE NodeID = n.NodeID)) AS reason
FROM RelayGroup rg
JOIN Node n ON rg.NodeID = n.NodeID
ORDER BY n.NodeName ASC ;
-- 21s, 15s, 13s
SELECT DISTINCT n.NodeName,(MAX(ch.Reason) KEEP (DENSE_RANK FIRST ORDER BY ch.CommandHistID DESC)) AS reason
FROM RelayGroup rg
JOIN Node n ON rg.NodeID = n.NodeID
LEFT OUTER JOIN CommandHist ch ON ch.NodeID = rg.NodeID
GROUP BY n.NodeName ORDER BY n.NodeName ASC ;
I suspect that this could be made faster by using ROWNUM to get the first row from CommandHist -- what I'd like to do is
SELECT n.NodeName, ch.CommandHistID
FROM Node n
JOIN RelayGroup rg ON n.NodeID = rg.NodeID
LEFT OUTER JOIN (SELECT * FROM
(SELECT CommandHistId, Reason, NodeID FROM CommandHist WHERE NodeID = n.NodeID ORDER BY CommandHistID)
WHERE ROWNUM = 1) ch ON ch.NodeID = n.NodeID
ORDER BY n.NodeName ASC;
except that n.NodeID cannot be used within the inner select. Is there a way to 'inject' the value of n.NodeID so that it can be seen from the inner select?
Thanks in advance,
-Lars

Your requirement isn't exactly clear, as you haven't provided any table information or example data for us to understand what you are talking about.
If it's a performance issue take a look at these threads..
[How to post a SQL tuning request|http://forums.oracle.com/forums/thread.jspa?threadID=863295&tstart=0]
[When your query takes too long...|http://forums.oracle.com/forums/thread.jspa?messageID=1812597#1812597]
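If the requirement is the newest CommandHist row per node (or NULL when there is none), the usual single-scan idiom is an analytic function; a sketch using the tables named above, not a tested solution:
SELECT DISTINCT n.NodeName, ch.Reason AS reason
FROM Node n
JOIN RelayGroup rg ON rg.NodeID = n.NodeID
LEFT OUTER JOIN (SELECT NodeID, Reason,
                        ROW_NUMBER() OVER (PARTITION BY NodeID
                                           ORDER BY CommandHistID DESC) rn
                 FROM CommandHist) ch
  ON ch.NodeID = n.NodeID AND ch.rn = 1
ORDER BY n.NodeName ASC;
This touches CommandHist once instead of twice, and an index on CommandHist(NodeID, CommandHistID) keeps the window sort cheap; whether it beats the KEEP (DENSE_RANK FIRST) version is something only the actual execution plan will tell.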