SQL - Need Tuning tips for GROUP BY [LATEST EXECUTION PLAN IS ATTACHED]
Hi All Experts,
My SQL is taking a very long time to execute. If I remove the GROUP BY clause it runs within a minute, but as soon as I add SUM() and the GROUP BY clause it takes ages to run. The number of records without the GROUP BY clause is almost 85 lakh (roughly 8.5 million). Is the huge dataset killing this? Is there any way to tune the GROUP BY? Below are my SELECT hints and execution plan. Please help.
SQL
SELECT /*+ CURSOR_SHARING_EXACT gather_plan_statistics all_rows no_index(atm) no_expand
leading (src cpty atm)
index(bk WBKS_PK) index(src WSRC_UK1) index(acct WACC_UK1)
use_nl(acct src ccy prd cpty grate sb) */
EXECUTION PLAN
PLAN_TABLE_OUTPUT
SQL_ID 1y5pdhnb9tks5, child number 0
SELECT /*+ CURSOR_SHARING_EXACT gather_plan_statistics all_rows no_index(atm) no_expand leading (src cpty atm) index(bk
WBKS_PK) index(src WSRC_UK1) index(acct WACC_UK1) use_nl(acct src ccy prd cpty grate sb) */ atm.business_date,
atm.entity legal_entity, TO_NUMBER (atm.set_of_books) setofbooksid, atm.source_system_id sourcesystemid,
ccy.ccy_currency_code ccy_currency_code, acct.acct_account_code, 0 gl_bal, SUM (atm.amount)
atm_bal, 0 gbp_equ, ROUND (SUM (atm.amount * grate.rate), 4) AS
atm_equ, prd.prd_product_code, glacct.parentreportingclassification parentreportingclassification,
cpty_counterparty_code FROM wh_sources_d src,
Plan hash value: 4193892926
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 1 | HASH GROUP BY | | 1 | 1 | 471 |00:31:38.26 | 904M| 76703 | 649K| 649K| 1149K (0)|
| 2 | NESTED LOOPS | | 1 | 1 | 8362K|00:47:06.06 | 904M| 76703 | | | |
| 3 | NESTED LOOPS | | 1 | 1 | 10M|00:28:48.84 | 870M| 17085 | | | |
| 4 | NESTED LOOPS | | 1 | 1 | 10M|00:27:56.05 | 849M| 17084 | | | |
| 5 | NESTED LOOPS | | 1 | 8 | 18M|00:14:10.93 | 246M| 17084 | | | |
| 6 | NESTED LOOPS | | 1 | 22 | 18M|00:11:58.96 | 189M| 17084 | | | |
| 7 | NESTED LOOPS | | 1 | 22 | 18M|00:10:24.69 | 152M| 17084 | | | |
| 8 | NESTED LOOPS | | 1 | 1337 | 18M|00:06:00.74 | 95M| 17083 | | | |
| 9 | NESTED LOOPS | | 1 | 1337 | 18M|00:02:52.20 | 38M| 17073 | | | |
|* 10 | HASH JOIN | | 1 | 185K| 18M|00:03:46.38 | 1177K| 17073 | 939K| 939K| 575K (0)|
| 11 | NESTED LOOPS | | 1 | 3 | 3 |00:00:00.01 | 11 | 0 | | | |
| 12 | TABLE ACCESS BY INDEX ROWID | WH_SOURCES_D | 1 | 3 | 3 |00:00:00.01 | 3 | 0 | | | |
|* 13 | INDEX RANGE SCAN | WSRC_UK1 | 1 | 3 | 3 |00:00:00.01 | 2 | 0 | | | |
|* 14 | TABLE ACCESS BY INDEX ROWID | WH_COUNTERPARTIES_D | 3 | 1 | 3 |00:00:00.01 | 8 | 0 | | | |
|* 15 | INDEX UNIQUE SCAN | WCPY_U1 | 3 | 1 | 3 |00:00:00.01 | 5 | 0 | | | |
| 16 | PARTITION RANGE SINGLE | | 1 | 91M| 91M|00:00:00.08 | 1177K| 17073 | | | |
|* 17 | TABLE ACCESS FULL | WH_ATM_BALANCES_F | 1 | 91M| 91M|00:00:00.04 | 1177K| 17073 | | | |
|* 18 | TABLE ACCESS BY INDEX ROWID | WH_PRODUCTS_D | 18M| 1 | 18M|00:01:43.88 | 37M| 0 | | | |
|* 19 | INDEX UNIQUE SCAN | WPRD_UK1 | 18M| 1 | 18M|00:00:52.13 | 18M| 0 | | | |
|* 20 | TABLE ACCESS BY GLOBAL INDEX ROWID| WH_BOOKS_D | 18M| 1 | 18M|00:02:53.01 | 56M| 10 | | | |
|* 21 | INDEX UNIQUE SCAN | WBKS_PK | 18M| 1 | 18M|00:01:08.32 | 37M| 10 | | | |
|* 22 | TABLE ACCESS BY INDEX ROWID | T_SDM_SOURCEBOOK | 18M| 1 | 18M|00:03:43.66 | 56M| 1 | | | |
|* 23 | INDEX RANGE SCAN | TSSB_N5 | 18M| 2 | 23M|00:01:11.50 | 18M| 1 | | | |
|* 24 | TABLE ACCESS BY INDEX ROWID | WH_CURRENCIES_D | 18M| 1 | 18M|00:01:51.21 | 37M| 0 | | | |
|* 25 | INDEX UNIQUE SCAN | WCUR_PK | 18M| 1 | 18M|00:00:49.26 | 18M| 0 | | | |
| 26 | TABLE ACCESS BY INDEX ROWID | WH_GL_DAILY_RATES_F | 18M| 1 | 18M|00:01:55.84 | 56M| 0 | | | |
|* 27 | INDEX UNIQUE SCAN | WGDR_U2 | 18M| 1 | 18M|00:01:10.89 | 37M| 0 | | | |
| 28 | INLIST ITERATOR | | 18M| | 10M|00:22:40.03 | 603M| 0 | | | |
|* 29 | TABLE ACCESS BY INDEX ROWID | WH_ACCOUNTS_D | 150M| 1 | 10M|00:20:19.05 | 603M| 0 | | | |
|* 30 | INDEX UNIQUE SCAN | WACC_UK1 | 150M| 5 | 150M|00:10:16.81 | 452M| 0 | | | |
| 31 | TABLE ACCESS BY INDEX ROWID | T_SDM_GLACCOUNT | 10M| 1 | 10M|00:00:50.64 | 21M| 1 | | | |
|* 32 | INDEX UNIQUE SCAN | TSG_PK | 10M| 1 | 10M|00:00:26.17 | 10M| 0 | | | |
|* 33 | TABLE ACCESS BY INDEX ROWID | WH_COMMON_TRADES_D | 10M| 1 | 8362K|00:18:52.56 | 33M| 59618 | | | |
|* 34 | INDEX UNIQUE SCAN | WCTD_PK | 10M| 1 | 10M|00:03:06.56 | 21M| 5391 | | | |
Edited by: user535789 on Mar 17, 2011 9:45 PM
Edited by: user535789 on Mar 20, 2011 8:33 PM
user535789 wrote:
My SQL is taking so much time to execute. If I remove the group by clause it is running within a minute but as soon as I am putting sum() and group by clause it is taking ages to run the sql.
I doubt that your 8 million records are shown within minutes.
I guess that the output started after a few minutes. But this does not mean that the full resultset is there. It just means the database is able to return to you the first few records after a few minutes.
Once you add a group by (or an order by) then this requires that all the rows need to be fetched before the database can start showing them to you.
But maybe you could run some tests to compare the full output. I find SET AUTOTRACE TRACEONLY useful for this purpose (in SQL*Plus), since it avoids printing the result set to the screen.
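To make the comparison fair, both statements must be fetched to completion. A minimal SQL*Plus sketch (the commented-out joins stand in for the full FROM/WHERE of the original query, which is not shown in the post):

```sql
SET AUTOTRACE TRACEONLY  -- execute and fetch every row, but display none
SET TIMING ON

-- 1) the plain join: elapsed time now includes the full fetch
SELECT atm.business_date, atm.amount
FROM   wh_atm_balances_f atm /* ... plus the other joined tables ... */;

-- 2) the aggregated form, for comparison
SELECT   atm.business_date, SUM(atm.amount)
FROM     wh_atm_balances_f atm /* ... plus the other joined tables ... */
GROUP BY atm.business_date;
```

If the first statement also takes half an hour once it is forced to fetch all 8 million rows, then the GROUP BY itself is not the problem.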
Similar Messages
-
Need some tips for Database Developer Role.
Dear All,
Next week, I'm going to face an Interview for Database Developer role.
For this, I need some more & useful information on these recommended points.
1. Involve in a designing part of Data warehouse from
scratch
2. Create complex analytic queries on large data sets.
3. Analyse trends in key metrics.
4. Monitoring and optimizing the performance of the database.
Please help get the vital information on these points.
All help will be highly appreciated.
Thanks,
1. Involve in the designing part of a Data Warehouse from scratch
Design Database...
This needs a lot of information about the business and its functionality, and many other things: tables, relationships, etc...
http://technet.microsoft.com/en-us/library/ms187099%28v=sql.105%29.aspx
Code Design...
SP's, Funcitions, Views, Sub queries, Joins, Triggers etc...
DW Design
DB size, the number of reports, historical data details, reduced normalization, etc....
http://technet.microsoft.com/en-us/library/aa902672%28v=sql.80%29.aspx
2. Create complex analytic queries on large data sets.
Its all based on your current database design, size, required output, data, performance etc..
4. Monitoring and optimizing the performance of the database.
Perfmon, Activity Monitor, Spotlight, custom queries, DMVs, sp_whoisactive, execution plans, and many other third-party tools... and maybe the best experience will give the best view and clarity :)
Note: This is a very big topic and it is not easy to answer in a few words or lines... :) Google it for more details...
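As a small illustration of two of the items above (note that sp_whoisactive is a free community stored procedure, not built into SQL Server, so it must be installed before it can be called):

```sql
-- Show currently executing requests, including their query plans
-- (assumes sp_WhoIsActive has been installed, typically in master).
EXEC sp_WhoIsActive @get_plans = 1;

-- A built-in alternative using DMVs: list currently running requests
-- together with the text of the statement they are executing.
SELECT r.session_id, r.status, r.cpu_time, r.total_elapsed_time, t.text
FROM   sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE  r.session_id <> @@SPID;
```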
Raju Rasagounder Sr MSSQL DBA -
Two different HASH GROUP BY in execution plan
Hi ALL;
Oracle version
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE 11.1.0.7.0 Production
TNS for Linux: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production
SQL
select company_code, account_number, transaction_id,
decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,
(last_day(to_date('04/21/2010','MM/DD/YYYY')) - min(z.accounting_date) ) age,sum(z.amount)
from
(
select /*+ PARALLEL(use, 2) */ company_code,substr(account_number, 1, 5) account_number,transaction_id,
decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,use.amount,use.accounting_date
from financials.unbalanced_subledger_entries use
where use.accounting_date >= to_date('04/21/2010','MM/DD/YYYY')
and use.accounting_date < to_date('04/21/2010','MM/DD/YYYY') + 1
UNION ALL
select /*+ PARALLEL(se, 2) */ company_code, substr(se.account_number, 1, 5) account_number,transaction_id,
decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,se.amount,se.accounting_date
from financials.temp2_sl_snapshot_entries se,financials.account_numbers an
where se.account_number = an.account_number
and an.subledger_type in ('C', 'AC')
) z
group by company_code,account_number,transaction_id,decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type)
having abs(sum(z.amount)) >= 0.01
explain plan
Plan hash value: 1993777817
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
| 0 | SELECT STATEMENT | | | | 76718 (100)| | | | |
| 1 | PX COORDINATOR | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | 15M| 2055M| 76718 (2)| 00:15:21 | Q1,02 | P->S | QC (RAND) |
|* 3 | FILTER | | | | | | Q1,02 | PCWC | |
| 4 | HASH GROUP BY | | 15M| 2055M| 76718 (2)| 00:15:21 | Q1,02 | PCWP | |
| 5 | PX RECEIVE | | 15M| 2055M| 76718 (2)| 00:15:21 | Q1,02 | PCWP | |
| 6 | PX SEND HASH | :TQ10001 | 15M| 2055M| 76718 (2)| 00:15:21 | Q1,01 | P->P | HASH |
| 7 | HASH GROUP BY | | 15M| 2055M| 76718 (2)| 00:15:21 | Q1,01 | PCWP | |
| 8 | VIEW | | 15M| 2055M| 76116 (1)| 00:15:14 | Q1,01 | PCWP | |
| 9 | UNION-ALL | | | | | | Q1,01 | PCWP | |
| 10 | PX BLOCK ITERATOR | | 11 | 539 | 1845 (1)| 00:00:23 | Q1,01 | PCWC | |
|* 11 | TABLE ACCESS FULL | UNBALANCED_SUBLEDGER_ENTRIES | 11 | 539 | 1845 (1)| 00:00:23 | Q1,01 | PCWP | |
|* 12 | HASH JOIN | | 15M| 928M| 74270 (1)| 00:14:52 | Q1,01 | PCWP | |
| 13 | BUFFER SORT | | | | | | Q1,01 | PCWC | |
| 14 | PX RECEIVE | | 21 | 210 | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
| 15 | PX SEND BROADCAST | :TQ10000 | 21 | 210 | 2 (0)| 00:00:01 | | S->P | BROADCAST |
|* 16 | TABLE ACCESS FULL| ACCOUNT_NUMBERS | 21 | 210 | 2 (0)| 00:00:01 | | | |
| 17 | PX BLOCK ITERATOR | | 25M| 1250M| 74183 (1)| 00:14:51 | Q1,01 | PCWC | |
|* 18 | TABLE ACCESS FULL | TEMP2_SL_SNAPSHOT_ENTRIES | 25M| 1250M| 74183 (1)| 00:14:51 | Q1,01 | PCWP | |
Predicate Information (identified by operation id):
3 - filter(ABS(SUM(SYS_OP_CSR(SYS_OP_MSR(SUM("Z"."AMOUNT"),MIN("Z"."ACCOUNTING_DATE")),0)))>=.01)
11 - access(:Z>=:Z AND :Z<=:Z)
filter(("USE"."ACCOUNTING_DATE"<TO_DATE(' 2010-04-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"USE"."ACCOUNTING_DATE">=TO_DATE(' 2010-04-21 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
12 - access("SE"."ACCOUNT_NUMBER"="AN"."ACCOUNT_NUMBER")
16 - filter(("AN"."SUBLEDGER_TYPE"='AC' OR "AN"."SUBLEDGER_TYPE"='C'))
18 - access(:Z>=:Z AND :Z<=:Z)
I have a few doubts regarding this execution plan and I am sure my questions will get answered here.
Q-1: Why am I getting two different HASH GROUP BY operations (operation ids 4 & 7) even though there is only a single GROUP BY clause? Is that due to the UNION ALL operator merging two different row sources, with HASH GROUP BY being applied to each of them individually?
Q-2: What does 'BUFFER SORT' (operation id 13) indicate? Sometimes I get this operation and sometimes I do not. For some other queries, I have observed around 10 GB of TEMP space and a high cost against this operation. So I am curious whether it is really helpful; if not, how can I avoid it?
Q-3: Under the Predicate section, what does step 18 suggest? I am not using any filter like access(:Z>=:Z AND :Z<=:Z).
aychin wrote:
Hi,
About BUFFER SORT: first of all, it is not specific to parallel executions. This step in the plan indicates that internal sorting takes place. It doesn't mean that rows will be returned sorted; in other words, it doesn't guarantee that rows will be sorted in the resulting row set, because that is not the main purpose of this operation.
I've previously suggested that the "buffer sort" should really simply say "buffering", but that it hijacks the buffering mechanism of sorting and therefore gets reported completely spuriously as a sort (see http://jonathanlewis.wordpress.com/2006/12/17/buffer-sorts/ ).
In this case, I think the buffer sort may be a consequence of the broadcast distribution - and tells us that the entire broadcast is being buffered before the hash join starts. It's interesting to note that in the recent of the two plans with a buffer sort the second (probe) table in the hash join seems to be accessed first and broadcast before the first table is scanned to allow the join to occur.
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
+"Science is more than a body of knowledge; it is a way of thinking"+
+Carl Sagan+ -
AWR shows this SQL as the highest CPU-consuming SQL. Does this SQL need tuning?
Hi,
The AWR report is showing this SQL as the highest CPU consumer.
Please consider me a newbie and help me to understand this.
Thanking you ..
SQL ordered by CPU Time DB/Inst: Snaps:
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
CPU Elapsed CPU per % Total
Time (s) Time (s) Executions Exec (s) DB Time SQL Id
2,658 2,665 1 2658.18 5.7 5aawbyzqjk8by
select distinct trp.site_id from tas_receipts_process trp , tas_tsa_info tti, ta
s_site ts where trp.tsa_id = tti.tsa_id and tti.status = 1 and ts.site_id = trp
.site_id and (tti.max_install != trp.pushed_rsn and ((tti.max_install = 0 and
(trp.pushed_rsn - trp.curr_rsn) < ts.workahead_count * :1) or (tti.max_install
Complete sql =>
select distinct trp.site_id from tas_receipts_process trp , tas_tsa_info tti, tas_site
ts
where
trp.tsa_id = tti.tsa_id
and tti.status = 1
and ts.site_id = trp.site_id
and
tti.max_install != trp.pushed_rsn
and (
(tti.max_install = 0 and
(trp.pushed_rsn - trp.curr_rsn) < ts.workahead_count * :1
) or
(tti.max_install > 0 and
(trp.pushed_rsn - trp.curr_rsn) < ts.workahead_count * :2
) or
(tti.max_install = trp.pushed_rsn and
tti.max_install <> 0
) or
(trp.pushed_time !=
to_date(tti.created_date,'dd-MON-yyhh24:mi:ss') + (1/24/60) * ts.workahead_time
) and
to_date(sysdate,'dd-MON-yyhh24:mi:ss') + (1/24/60) * :3
) > trp.pushed_time
)
Getting the explain plan for the above sql =>
SQL> SELECT plan_table_output FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('g6c8y31xr06vp',0,'ALL'));
PLAN_TABLE_OUTPUT
SQL_ID g6c8y31xr06vp, child number 0
select distinct trp.site_id from tas_receipts_process trp , tas_tsa_info tti, tas_site
ts where trp.tsa_id = tti.tsa_id and tti.status = 1 and ts.site_id = trp.site_id and
(tti.max_install != trp.pushed_rsn and ((tti.max_install = 0 and (trp.pushed_rsn -
trp.curr_rsn) < ts.workahead_count * :1) or (tti.max_install > 0 and (trp.pushed_rsn -
trp.curr_rsn) < ts.workahead_count * :2) or (tti.max_install = trp.pushed_rsn and
tti.max_install <> 0 ) )) or (trp.pushed_time != (to_date(tti.created_date,'dd-MON-yy
hh24:mi:ss') + (1/24/60) * ts.workahead_time) and ((to_date(sysdate,'dd-MON-yy
hh24:mi:ss') + (1/24/60) * :3) > trp.pushed_time))
Plan hash value: 2862358316
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | | | 601K(100)| |
| 1 | HASH UNIQUE | | 4 | 216 | 601K (10)| 02:00:14 |
| 2 | CONCATENATION | | | | | |
| 3 | NESTED LOOPS | | 200M| 10G| 579K (6)| 01:55:52 |
| 4 | MERGE JOIN CARTESIAN| | 2849 | 99715 | 12 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL | TAS_SITE | 7 | 70 | 3 (0)| 00:00:01 |
| 6 | BUFFER SORT | | 407 | 10175 | 9 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | TAS_RECEIPTS_PROCESS | 407 | 10175 | 1 (0)| 00:00:01 |
|* 8 | TABLE ACCESS FULL | TAS_TSA_INFO | 70411 | 1306K| 203 (6)| 00:00:03 |
|* 9 | HASH JOIN | | 2 | 108 | 203 (2)| 00:00:03 |
|* 10 | HASH JOIN | | 407 | 14245 | 7 (15)| 00:00:01 |
| 11 | TABLE ACCESS FULL | TAS_SITE | 7 | 70 | 3 (0)| 00:00:01 |
| 12 | TABLE ACCESS FULL | TAS_RECEIPTS_PROCESS | 407 | 10175 | 3 (0)| 00:00:01 |
|* 13 | TABLE ACCESS FULL | TAS_TSA_INFO | 21474 | 398K| 196 (2)| 00:00:03 |
Query Block Name / Object Alias (identified by operation id):
1 - SEL$1
5 - SEL$1_1 / TS@SEL$1
7 - SEL$1_1 / TRP@SEL$1
8 - SEL$1_1 / TTI@SEL$1
11 - SEL$1_2 / TS@SEL$1_2
12 - SEL$1_2 / TRP@SEL$1_2
13 - SEL$1_2 / TTI@SEL$1_2
Predicate Information (identified by operation id):
7 - filter("TRP"."PUSHED_TIME"<TO_DATE(TO_CHAR(SYSDATE@!),'dd-MON-yy
hh24:mi:ss')+.000694444444444444444444444444444444444445*:3)
8 - filter("TRP"."PUSHED_TIME"<>TO_DATE(INTERNAL_FUNCTION("TTI"."CREATED_DATE"),'dd-M
ON-yy hh24:mi:ss')+.000694444444444444444444444444444444444445*"TS"."WORKAHEAD_TIME")
9 - access("TRP"."TSA_ID"="TTI"."TSA_ID")
filter(("TTI"."MAX_INSTALL"<>"TRP"."PUSHED_RSN" AND (("TTI"."MAX_INSTALL"=0 AND
"TRP"."PUSHED_RSN"-"TRP"."CURR_RSN"<"TS"."WORKAHEAD_COUNT"*:1) OR
("TTI"."MAX_INSTALL">0 AND "TRP"."PUSHED_RSN"-"TRP"."CURR_RSN"<"TS"."WORKAHEAD_COUNT"*:2
) OR ("TTI"."MAX_INSTALL"="TRP"."PUSHED_RSN" AND "TTI"."MAX_INSTALL"<>0)) AND
(LNNVL("TRP"."PUSHED_TIME"<TO_DATE(TO_CHAR(SYSDATE@!),'dd-MON-yy
hh24:mi:ss')+.000694444444444444444444444444444444444445*:3) OR
LNNVL("TRP"."PUSHED_TIME"<>TO_DATE(INTERNAL_FUNCTION("TTI"."CREATED_DATE"),'dd-MON-yy
hh24:mi:ss')+.000694444444444444444444444444444444444445*"TS"."WORKAHEAD_TIME"))))
10 - access("TS"."SITE_ID"="TRP"."SITE_ID")
13 - filter("TTI"."STATUS"=1)
Column Projection Information (identified by operation id):
1 - "TRP"."SITE_ID"[NUMBER,22]
2 - "TS"."SITE_ID"[NUMBER,22], "TS"."WORKAHEAD_COUNT"[NUMBER,22],
"TS"."WORKAHEAD_TIME"[NUMBER,22], "TRP"."TSA_ID"[NUMBER,22],
"TRP"."SITE_ID"[NUMBER,22], "TRP"."CURR_RSN"[NUMBER,22], "TRP"."PUSHED_RSN"[NUMBER,22],
"TRP"."PUSHED_TIME"[DATE,7], "TTI"."TSA_ID"[NUMBER,22], "TTI"."STATUS"[NUMBER,22],
"TTI"."MAX_INSTALL"[NUMBER,22], "TTI"."CREATED_DATE"[DATE,7]
3 - "TS"."SITE_ID"[NUMBER,22], "TS"."WORKAHEAD_COUNT"[NUMBER,22],
"TS"."WORKAHEAD_TIME"[NUMBER,22], "TRP"."TSA_ID"[NUMBER,22],
"TRP"."SITE_ID"[NUMBER,22], "TRP"."CURR_RSN"[NUMBER,22], "TRP"."PUSHED_RSN"[NUMBER,22],
"TRP"."PUSHED_TIME"[DATE,7], "TTI"."TSA_ID"[NUMBER,22], "TTI"."STATUS"[NUMBER,22],
"TTI"."MAX_INSTALL"[NUMBER,22], "TTI"."CREATED_DATE"[DATE,7]
4 - "TS"."SITE_ID"[NUMBER,22], "TS"."WORKAHEAD_COUNT"[NUMBER,22],
"TS"."WORKAHEAD_TIME"[NUMBER,22], "TRP"."TSA_ID"[NUMBER,22],
"TRP"."SITE_ID"[NUMBER,22], "TRP"."CURR_RSN"[NUMBER,22], "TRP"."PUSHED_RSN"[NUMBER,22],
"TRP"."PUSHED_TIME"[DATE,7]
5 - "TS"."SITE_ID"[NUMBER,22], "TS"."WORKAHEAD_COUNT"[NUMBER,22],
"TS"."WORKAHEAD_TIME"[NUMBER,22]
6 - (#keys=0) "TRP"."TSA_ID"[NUMBER,22], "TRP"."SITE_ID"[NUMBER,22],
"TRP"."CURR_RSN"[NUMBER,22], "TRP"."PUSHED_RSN"[NUMBER,22], "TRP"."PUSHED_TIME"[DATE,7]
7 - "TRP"."TSA_ID"[NUMBER,22], "TRP"."SITE_ID"[NUMBER,22],
"TRP"."CURR_RSN"[NUMBER,22], "TRP"."PUSHED_RSN"[NUMBER,22], "TRP"."PUSHED_TIME"[DATE,7]
8 - "TTI"."TSA_ID"[NUMBER,22], "TTI"."STATUS"[NUMBER,22],
"TTI"."MAX_INSTALL"[NUMBER,22], "TTI"."CREATED_DATE"[DATE,7]
9 - (#keys=1) "TRP"."TSA_ID"[NUMBER,22], "TTI"."TSA_ID"[NUMBER,22],
"TS"."SITE_ID"[NUMBER,22], "TRP"."SITE_ID"[NUMBER,22],
"TS"."WORKAHEAD_TIME"[NUMBER,22], "TS"."WORKAHEAD_COUNT"[NUMBER,22],
"TRP"."PUSHED_RSN"[NUMBER,22], "TRP"."PUSHED_TIME"[DATE,7],
"TRP"."CURR_RSN"[NUMBER,22], "TTI"."CREATED_DATE"[DATE,7], "TTI"."STATUS"[NUMBER,22],
"TTI"."MAX_INSTALL"[NUMBER,22]
10 - (#keys=1) "TS"."SITE_ID"[NUMBER,22], "TRP"."SITE_ID"[NUMBER,22],
"TS"."WORKAHEAD_TIME"[NUMBER,22], "TS"."WORKAHEAD_COUNT"[NUMBER,22],
"TRP"."TSA_ID"[NUMBER,22], "TRP"."PUSHED_TIME"[DATE,7], "TRP"."CURR_RSN"[NUMBER,22],
"TRP"."PUSHED_RSN"[NUMBER,22]
11 - "TS"."SITE_ID"[NUMBER,22], "TS"."WORKAHEAD_COUNT"[NUMBER,22],
"TS"."WORKAHEAD_TIME"[NUMBER,22]
12 - "TRP"."TSA_ID"[NUMBER,22], "TRP"."SITE_ID"[NUMBER,22],
"TRP"."CURR_RSN"[NUMBER,22], "TRP"."PUSHED_RSN"[NUMBER,22], "TRP"."PUSHED_TIME"[DATE,7]
13 - "TTI"."TSA_ID"[NUMBER,22], "TTI"."STATUS"[NUMBER,22],
"TTI"."MAX_INSTALL"[NUMBER,22], "TTI"."CREATED_DATE"[DATE,7]
105 rows selected.
Sizes of concerned objects =>
OWNER SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME EXTENTS BYTES_
AZD_SCHM TAS_TSA_INFO TABLE TATSU_DATA_TS 22 7,340,032
AZD_SCHM TAS_TSA_INFO_PK INDEX TATSU_DATA_TS 17 2,097,152
AZD_SCHM TAS_RECEIPTS_PROCESS TABLE TATSU_DATA_TS 1 65,536
AZD_SCHM TAS_RECEIPTS_PROCESS_IDX INDEX TATSU_INDEX_TS 1 65,536
AZD_SCHM TAS_SITE TABLE TATSU_DATA_TS 1 65,536
AZD_SCHM TAS_SITE_NAME_UNQ INDEX TATSU_INDEX_TS 1 65,536
AZD_SCHM TAS_SITE_PK INDEX TATSU_DATA_TS 1 65,536
--------------- ------------------------------ -------------------- -------------------- ------- ----------------
Please suggest how to tune the above SQL.
Does the above SQL plan look good?
Any comment and help is highly appreciated.
Thanks & Regards,
IVW
ivw wrote:
Complete sql =>
Please suggest how to tune this above SQL
Does above sql plan looks good
Any comment and help is highly appreciated.
Your SQL is probably incorrectly using the OR operator. In its present form it means this:
select distinct trp.site_id from tas_receipts_process trp , tas_tsa_info tti, tas_site
ts
where
trp.tsa_id = tti.tsa_id
and tti.status = 1
and ts.site_id = trp.site_id
and
tti.max_install != trp.pushed_rsn
and (
(tti.max_install = 0 and
(trp.pushed_rsn - trp.curr_rsn) < ts.workahead_count * :1
) or
(tti.max_install > 0 and
(trp.pushed_rsn - trp.curr_rsn) < ts.workahead_count * :2
) or
(tti.max_install = trp.pushed_rsn and
tti.max_install <> 0
union all
select distinct trp.site_id from tas_receipts_process trp , tas_tsa_info tti, tas_site
ts
where
(trp.pushed_time !=
to_date(tti.created_date,'dd-MON-yyhh24:mi:ss') + (1/24/60) * ts.workahead_time
) and
to_date(sysdate,'dd-MON-yyhh24:mi:ss') + (1/24/60) * :3
) > trp.pushed_time
and not (above condition)
So one of the two parts of the SQL is missing all the join predicates between the tables due to the OR operator at top level. The way the SQL is formatted, it might be meant to read like this:
select distinct trp.site_id from tas_receipts_process trp , tas_tsa_info tti, tas_site
ts
where
trp.tsa_id = tti.tsa_id
and tti.status = 1
and ts.site_id = trp.site_id
and
(tti.max_install != trp.pushed_rsn
and (
(tti.max_install = 0 and
(trp.pushed_rsn - trp.curr_rsn) < ts.workahead_count * :1
) or
(tti.max_install > 0 and
(trp.pushed_rsn - trp.curr_rsn) < ts.workahead_count * :2
) or
(tti.max_install = trp.pushed_rsn and
tti.max_install <> 0
or (trp.pushed_time != to_date(tti.created_date,'dd-MON-yyhh24:mi:ss') + (1/24/60) * ts.workahead_time
and to_date(sysdate,'dd-MON-yyhh24:mi:ss') + (1/24/60) * :3 > trp.pushed_time
)
But you need to find out what exactly it is supposed to express from a logical point of view. Very likely the present form is simply incorrect. Depending on your data, correcting the usage of the OR operator might even render the DISTINCT operator redundant, which is probably at present used to remove the many duplicates generated by the missing join predicates.
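The underlying rule is simply that AND binds more tightly than OR, so a top-level OR without parentheses detaches everything after it from the join predicates. A minimal sketch with hypothetical columns a, b and c:

```sql
-- These two predicates are equivalent, because AND binds tighter than OR:
--   where a = 1 and b = 2 or  c = 3
--   where (a = 1 and b = 2) or c = 3
--
-- To restrict the OR to only the last two conditions, the parentheses
-- must be written explicitly:
--   where a = 1 and (b = 2 or c = 3)
```

In the posted query, every condition after the top-level OR (including the join conditions that precede it) is bypassed whenever the pushed_time branch is true, which is exactly how the missing join predicates arise.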
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
Hi,
I have some data and need to group it based on date. please see the below data
pkey--------from_date----------to_date------------amount-------------
1 ---------8-aug-12 ----------31-aug-12---------120
1 ---------31-aug-12 --------1-sep-12--------- 130
1 ---------1-sep-12 ----------2-sep-12--------- 150
1 ---------3-sep-12 ----------4-sep-12--------- 150
1 ---------5-sep-12 ----------7-sep-12--------- 100
1 ---------7-sep-12 ----------8-sep-12--------- 200
1 ---------8-sep-12 ----------20-sep-12---------120
1 ---------20-sep-12 --------1-oct-12--------- 130
1 ---------1-oct-12 ----------8-oct-12--------- 150
and so on.....
I need to group the data when a month has finished. E.g. row 1's from_date is 08-aug-12 and row 6's to_date is 08-sep-12, which means almost one month has passed, so those rows should be grouped to show the sum of amount with min(from_date) and max(to_date), i.e. the total amount on a monthly basis.
I have been trying to write this query for many hours with no success. The data is huge, with many keys and dates... please share your ideas/tips/feedback.
Thanks
Edited by: hard_stone on Jan 18, 2013 11:38 AM
Try this
with t
as
(
select 1 pkey, to_date('08-aug-12', 'dd-mon-rr') from_date, to_date('31-aug-12', 'dd-mon-rr') to_date, 120 amount from dual
union all
select 2 pkey, to_date('31-aug-12', 'dd-mon-rr') from_date, to_date('01-sep-12', 'dd-mon-rr') to_date, 130 amount from dual
union all
select 3 pkey, to_date('01-sep-12', 'dd-mon-rr') from_date, to_date('02-sep-12', 'dd-mon-rr') to_date, 150 amount from dual
union all
select 4 pkey, to_date('03-sep-12', 'dd-mon-rr') from_date, to_date('04-sep-12', 'dd-mon-rr') to_date, 150 amount from dual
union all
select 5 pkey, to_date('05-sep-12', 'dd-mon-rr') from_date, to_date('07-sep-12', 'dd-mon-rr') to_date, 100 amount from dual
union all
select 6 pkey, to_date('07-sep-12', 'dd-mon-rr') from_date, to_date('08-sep-12', 'dd-mon-rr') to_date, 200 amount from dual
union all
select 7 pkey, to_date('08-sep-12', 'dd-mon-rr') from_date, to_date('20-sep-12', 'dd-mon-rr') to_date, 120 amount from dual
union all
select 8 pkey, to_date('20-sep-12', 'dd-mon-rr') from_date, to_date('01-oct-12', 'dd-mon-rr') to_date, 130 amount from dual
union all
select 9 pkey, to_date('01-oct-12', 'dd-mon-rr') from_date, to_date('08-oct-12', 'dd-mon-rr') to_date, 150 amount from dual
)
select min(from_date) from_date, max(to_date) to_date, sum(amount) amount
from (
select pkey, from_date, to_date, amount, next_month
from t
model
dimension by (pkey)
measures (from_date, to_date, amount, to_date('19000101', 'yyyymmdd') next_month, add_months(from_date, 1) temp_month)
rules upsert
next_month[any] = case when next_month[cv(pkey)-1] >= to_date[cv(pkey)] then next_month[cv(pkey)-1]
else temp_month[cv(pkey)] end
)
group by next_month;
FROM_DATE TO_DATE AMOUNT
08-AUG-12 08-SEP-12 850
08-SEP-12 08-OCT-12 400 -
Need a tip for an image viewer
Hi guys, I'm trying to make an Image Viewer app and I couldn't find a way to tell my main AS3 class which image opened the application.
I have already specified images in Windows to "Open with" my Image Viewer, and indeed my application executes, but is there a way to register the path of the file that opened my application?
So far I've only seen drag-and-drop implementations, and I imagine that your answer will be negative, but I'm trying to find a solution to this because it's the only problem that keeps me from replacing the Windows Picture and Fax Viewer, so any tip or idea will be appreciated.
Thanks mate!
It is now possible to do it with AIR 1.5.1
In case someone needs it, here's the code that does the magic (AS3.0)
import flash.display.NativeWindow;
import flash.desktop.NativeApplication;
import flash.events.InvokeEvent;
var fileLoader:Loader = new Loader();
addChild(fileLoader);
NativeApplication.nativeApplication.addEventListener(InvokeEvent.INVOKE, handleInitializationArgs);
function handleInitializationArgs(event:InvokeEvent):void
{
    // get the application arguments from the InvokeEvent object
    var args:Array = event.arguments as Array;
    // if arguments were provided to the application
    if (args.length)
    {
        // of the arguments provided, assume the first is of the associated file type
        var fileToOpen:String = String(args[0]);
        // load that argument as a url into the loader
        fileLoader.load(new URLRequest(fileToOpen));
    }
}
Need some tips for premiere cs6 video editing
I have recorded professionally in a sound proof room with a microphone mc's, it's really good video quality and the sound quality is perfect too.
I have recorded with 2 different devices at different angles i'm going to remove sound from both video clips and use the recorded mp3 file made in Adobe Audition CS6 (amazing program) I want to have the 2nd angle recording put in the video so it shows the main video then the 2nd in a little box in the top corner, how can i do this?
Also any tips people can suggest for me on Speedgrade CS6 or After effects CS6 that will improve the viewing experience will be very appreciated.
Last thing: playback in Premiere and After Effects is very slow. It's not the computer's fault, my laptop is extremely fast, but I've had this laptop about 8 months and have not updated the graphics card driver; would it be that, or is there a way to fix this? When I try playback in After Effects it says something like '0.9/29 frames (not real time)'. I downloaded QuickTime, I thought this was real time?
Thank you in advance
Here are some Tutorials
http://www.youtube.com/playlist?list=PL507B3498B4479B96&feature=plcp
http://forums.adobe.com/thread/913334
http://forums.adobe.com/thread/845731
http://forums.adobe.com/message/3234794
A "crash course" http://forums.adobe.com/thread/761834
A Video Primer for Premiere http://forums.adobe.com/thread/498251
Premiere Tutorial http://forums.adobe.com/thread/424009
And http://forums.adobe.com/message/2738611
And http://blogs.adobe.com/kevinmonahan/2012/08/28/free-video-tutorial-samples-from-learning-premiere-pro-cs6/
-and more from Kevin http://forums.adobe.com/message/4714153
http://blogs.adobe.com/premiereprotraining/2010/06/video_tutorials_didacticiels_t.html
And http://blogs.adobe.com/premiereprotraining/2010/06/how_to_search_for_premiere_pro.html
And http://bellunevideo.com/tutlist.php
Premiere Pro Wiki http://premierepro.wikia.com/wiki/Main_Page
Tutorial http://www.tutorialized.com/tutorials/Premiere/1
Tutorial http://www.dvxuser.com/V6/forumdisplay.php?f=21
Tutorial HD to SD w/CS4 http://bellunevideo.com/tutorials/CS4_HD2SD/CS4_HD2SD.html
Exporting to DVD http://help.adobe.com/en_US/premierepro/cs/using/WS3E252E59-6BE5-4668-A12E-4ED0348C3FDBa.html
And http://help.adobe.com/en_US/premierepro/cs/using/WSCDE15B03-1236-483f-BBD4-263E77B445B9.html
Color correction http://forums.adobe.com/thread/892861
After Effects Tutorials http://www.videocopilot.net/
Surround Sound http://forums.adobe.com/thread/517372
Photo Scaling for Video http://forums.adobe.com/thread/450798
-Too Large May = Crash http://forums.adobe.com/thread/879967
-And another crash report http://forums.adobe.com/thread/973935
CS6 http://www.dvxuser.com/V6/showthread.php?282290-New-Tutorial-Working-Faster-in-Premiere-Pro-CS6
Video Scaling https://blogs.adobe.com/premiereprotraining/2010/10/scaling-in-premiere-pro-cs5.html
Encore http://tv.adobe.com/show/learn-encore-cs4/
Authoring http://www.videocopilot.net/tutorials/dvd_authoring/
Encore Tutorial http://www.precomposed.com/blog/2009/05/encore-tutorial/
And more Encore http://library.creativecow.net/articles/devis_andrew/
Regions and NTSC vs PAL http://forums.adobe.com/thread/951042
-and Regions http://forums.adobe.com/thread/895223
PDF http://blogs.adobe.com/adobecustomersuccess/2011/05/14/help-support-pages-for-creative-suite-applications/ -
Hi
I need to know where on the web I can find LabVIEW examples.
I have been on National Instruments' web page but I couldn't find the examples. I would appreciate having the link.
Thanks,
Arash
Hi Arash;
LabVIEW already ships with a lot of very useful examples. Just go to the help and look for the examples.
If you do a search for LabVIEW examples at www.ni.com you will get lots of useful examples too.
At NI Developer Zone, if you go to Development Library > Measurement and Automation Software > LabVIEW, you will also find lots of examples and insights by subject.
Regards;
Enrique
www.vartortech.com -
Need a tip for a Fieldpoint program
Hi,
I have a main program which includes several subprograms. The hardware used is FieldPoint (several AI modules).
All subprograms use different channels of those AIs.
Which is the better way to deal with these AIs:
- reading all AIs (with "fp read") into one array and reading parts of the array in the subprograms
- or reading the appropriate AIs separately in each subprogram (with "fp read")
Thanks for any hint
Yves
Hi Yves,
This code is not FieldPoint specific, but it demonstrates one of the things you can set up with FGVs (functional global variables).
Note that in your case, a cluster, an array, or maybe a cluster with one array per AI module would do the job inside the FGV.
Feel free to ask any questions.
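The FGV idea described above (read all channels once, let each subprogram pull out only the slice it needs) can be sketched outside LabVIEW for illustration. Here is a rough Java analogue; the class and method names are made up for this sketch and are not part of any FieldPoint API:

```java
import java.util.Arrays;

// Rough Java analogue of a LabVIEW FGV (functional global variable):
// one shared, synchronized store for the latest FieldPoint AI scan.
public class AiStore {
    private double[] latest = new double[0];

    // Called once per scan, after a single "fp read" of all channels.
    public synchronized void write(double[] readings) {
        latest = readings.clone();
    }

    // Each subprogram asks only for the channel slice it needs.
    public synchronized double[] read(int from, int to) {
        return Arrays.copyOfRange(latest, from, to);
    }

    public static void main(String[] args) {
        AiStore store = new AiStore();
        store.write(new double[] {1.5, 2.5, 3.5, 4.5});        // one read of all AIs
        System.out.println(Arrays.toString(store.read(1, 3))); // [2.5, 3.5]
    }
}
```

The point is the same as in the FGV version: hardware is read once per scan, and the subprograms never touch the hardware directly.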
Message edited by TiTou on 11-09-2006 02:18 PM
When my feet touch the ground each morning the devil thinks "bloody hell... He's up again!"
Attachments:
SR.zip 42 KB -
Hello everybody,
Very first post here and an absolute beginner to Java, looking for a flying start. This is what I would like to do, and if somebody can guide me through it step by step I would be grateful. Here it goes:
At first, a program should be written that runs on two PCs, at least one of the PCs having Matlab installed. There should be a stand-alone user interface which is connected to the main program via network sockets, i.e. via the internal loopback interface. The user can select files belonging to a project, specify a string giving the
Matlab command for the desired function and give the names of the variables he wants to get back. Furthermore, IP-address of the desired remote machine, password and so on have to be input.
If all inputs are done and given to the main program, this program connects to the chosen remote entity and sends the files, commands and variables to it.
I would be glad if somebody can give me a starting point.
Thanks in advance.
First of all, try doing what you're trying to do in a stand-alone environment; it should not be that complicated.
To execute external commands, have a look at Runtime, Process, etc.:
http://java.sun.com/j2se/1.4.1/docs/api/java/lang/Runtime.html
http://java.sun.com/j2se/1.4.1/docs/api/java/lang/Process.html
Then create a client-server interface: start by getting the client and server to communicate by simple messaging, and then do the other, more complicated data exchanges.
To get the communication part done, have a look at ServerSocket, Socket, etc.:
http://java.sun.com/j2se/1.4.1/docs/api/java/net/ServerSocket.html
http://java.sun.com/j2se/1.4.1/docs/api/java/net/Socket.html
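The "simple messaging" step above can be sketched in a few lines. This is a minimal one-shot echo example using ServerSocket and Socket, with the server running on a background thread in the same process; in your real project the two sides would be separate programs:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoDemo {

    // Starts a one-shot echo server, connects a client to it,
    // sends msg, and returns whatever the server sent back.
    static String roundTrip(String msg) throws Exception {
        ServerSocket server = new ServerSocket(0); // port 0 = OS picks a free port
        Thread serverThread = new Thread(() -> {
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                // Read one line from the client and echo it back.
                out.println("echo: " + in.readLine());
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        serverThread.start();

        try (Socket socket = new Socket("127.0.0.1", server.getLocalPort());
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        } finally {
            serverThread.join();
            server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello")); // prints "echo: hello"
    }
}
```

Once a round trip like this works, you can replace the echoed string with your own protocol (file contents, the Matlab command string, variable names, and so on).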
hope this helps
oaq -
I have an iGo universal charger that I bought recently as a spare in case something happens to the factory charger. I noticed none of the tips worked. Is there a tip for the iGo I have that will fit my PC? If there is, I just need the tip # for it.
TW45
Yes, there is: the dv6700z (AMD) and dv6700t (Intel) use the same power supply.
-
Please I need some tips of video editing.
Hi! My name is Bruno Rauch, I'm Brazilian and I need some tips for video editing.
Below I'm going to put links to two videos that I made days ago. I'd be very grateful if you could watch them and give me some tips to increase my video quality.
Bubble Gun Treffen 2014 - Águas de Lindóia - SP - YouTube
Trip to San Pedro of Atacama - YouTube
Thanks for your help.
What I can suggest is that you look into using image stabilization. I know that the vehicle bounces around, but if you have the front of the car in the shot, or a mountain range in the distance, the software should be able to stabilize your footage some.
Having said that, I should also point out that there are ways to move more smoothly with your camera. There are devices available to steady the camera, called Steadycam. Here is a link.
B&H Photo Video
If you can't afford a Steadycam, just using a tripod can help. Keep it folded up, put the camera on it, pick it up under the head, and it acts as a balancing weight to smooth out the movement.
Using such things will also help you keep the horizon straight (except when the car is turning).
One more thing. Carry a bottle of window cleaner and lots of clean rags. -
What to look for in Execution plans ?
Hi Pals,
Assuming the query execution is slow, I have collected the XML plan. What should I look for in the actual execution plan? What are the top 10 things I need to watch out for?
I know this is a broad and generic question, but I am looking for anyone who is experienced in query tuning to answer it.
Thanks in advance.
Reading execution plans is a bit of an art. But a couple of things:
1) Download and install SQL Sentry Plan Explorer (www.sqlsentry.net). This is a free tool that in many cases gives a better experience for looking at execution plans, particularly big and ugly ones.
2) I usually look for the thickest arrows, as thickness indicates the number of rows being processed.
3) Look for deviations between estimates and actual values, as these deviations are often the source of bad performance. In Plan Explorer, you can quickly flip between the two. In SSMS you need to look at the popup for every operator. (But again, it is
the operators with fat arrows that are of most interest - and those before them.)
4) The way to read the plan is that the left-most operator asks the operators it is connected to for data. The net effect is that data flows from right to left, and the right-hand side is often more interesting.
5) Don't pay much attention to the percentages about operator cost. These percentages are estimates only,
not actual values. They are only reliable if the estimates are correct all the way through - and when you have a bad plan that is rarely the case.
This was the overall advice. Then there is more specific advice: are indexes being used when expected? Note that scans are not necessarily bad. Sometimes your problem is that you have a loop join + index seek, when you should have had two scans and a hash join.
Try to get a grip of how you would process the query, if you had to do it manually. Does the plan match that idea?
Erland Sommarskog, SQL Server MVP, [email protected] -
How to capture the execution plan for a query
Hi All,
Can anyone please help me find the command to capture the execution plan for a query?
Execution plan for: select * from EMP where <conditions>
It executes successfully, but I need to get the proper execution plan for it.
Thanks
971830 wrote:
i want to know where the execution plan gets generated??
in the PMON of the server process or in the shared pool??
i know that the optimizer creates the execution plan..
It is stored in the Library Cache (present inside the Shared Pool).
select * from v$sql_plan;
An absolutely beautiful white paper:
Refer this -- www.sagelogix.com/sagelogix/SearchResults/SAGE015052
Also -- http://www.toadworld.com/KNOWLEDGE/KnowledgeXpertforOracle/tabid/648/TopicID/XPVSP/Default.aspx
HTH
Ranit B. -
SQL Query C# Using Execution Plan Cache Without SP
I have a situation where I am executing a SQL query through C# code. I cannot use a stored procedure because the database is hosted by another company and I'm not allowed to create any new procedures. If I run my query in SQL Management Studio, the first time
takes approx 3 secs, then every run after that is instant. My query is looking for date ranges and accounts. So if I loop through accounts, each one takes approx 3 secs in my code. If I close the program and run it again, the accounts that originally took 3 secs
are now instant in my code. So my conclusion was that it is using a cached execution plan. I cannot find how to make execution plan reuse work for non-stored-procedure code. I have created a SqlCommand object with my query and 3 params. I loop through,
keeping the same command object and only changing the 3 params. It seems that each version with different params gets cached separately in the execution plans, so each is fast only for that particular query. My question is: how can I get SQL to not do this,
by either loading the execution plan or by making SQL treat my query as the same execution plan as the previous one? I have found multiple questions on this that pertain to stored procedures, but nothing I can find for direct-text query code.
Bob;
I ran the query with different accounts and different dates with instant results AFTER the very first query, which took the expected 3 secs. I changed all 3 fields that I have parameters for in my code, and it still remains instant in Management Studio but
still remains slow in my code. I'm providing a sample of the base query I'm using.
select i.Field1, i.Field2,
d.Field3 'Field3',
ip.Field4 'Field4',
k.Field5 'Field5'
from SampleDataTable1 i,
SampleDataTable2 k,
SampleDataTable3 ip,
SampleDataTable4 d
where i.Field1 = k.Field1 and i.Field4 = ip.Field4
and i.FieldDate between '<fromdate>' and '<thrudate>'
and k.Field6 = <Account>
Obviously the field names have been altered because the database is not mine, but other than the actual names it is accurate. It works; it just takes too long in code, as described in the initial post.
My params are set up during the init for the connection and the command.
sqlCmd.Parameters.Add("@FromDate", SqlDbType.DateTime);
sqlCmd.Parameters.Add("@ThruDate", SqlDbType.DateTime);
sqlCmd.Parameters.Add("@Account", SqlDbType.Decimal);
Each loop thru the code changes these 3 fields.
sqlCommand.Parameters["@FromDate"].Value = dtFrom;
sqlCommand.Parameters["@ThruDate"].Value = dtThru;
sqlCommand.Parameters["@Account"].Value = sAccountNumber;
SqlDataReader reader = sqlCommand.ExecuteReader();
while (reader.Read())
{
    // ... process each row ...
}
reader.Close();
One thing I have noticed is that the account field is decimal(20,0), and by default the init I'm using defaults to decimal(10), so I'm going to change the init to:
sqlCmd.Parameters["@Account"].Precision = 20;
sqlCmd.Parameters["@Account"].Scale = 0;
I don't believe this would change anything, but at this point I'm ready to try anything to get the query running faster.
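For comparison, here is a toy sketch (in Java, not SQL Server itself) of the behavior being described: a plan cache keyed by the exact query text. Embedding literals produces a different text per value, so each one compiles a new "plan", while parameterized text stays constant and reuses one entry. The class and query strings are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

public class PlanCacheDemo {

    // Toy stand-in for a plan cache keyed by the exact query text.
    static int compilations = 0;
    static final Map<String, String> cache = new HashMap<>();

    // "Executes" a query: compiles (expensively) only on a cache miss.
    static void execute(String sqlText) {
        cache.computeIfAbsent(sqlText, text -> {
            compilations++;          // cache miss: pay the compile cost
            return "plan for: " + text;
        });
    }

    public static void main(String[] args) {
        // Literals embedded in the text: every account compiles a new plan.
        for (int account = 1; account <= 3; account++) {
            execute("select * from t where acct = " + account);
        }
        int withLiterals = compilations;

        // Parameterized text: the string never changes, so one plan is reused.
        compilations = 0;
        cache.clear();
        for (int account = 1; account <= 3; account++) {
            execute("select * from t where acct = @Account");
        }
        System.out.println(withLiterals + " vs " + compilations); // 3 vs 1
    }
}
```

This is also why keeping the parameter data types stable matters: if the declared type of @Account changes between runs, the cached entry may not be matched even though the text is identical.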