Diagram size vs single-use sub-vi

I'm guessing this has been beaten to death elsewhere but I could not find it...
I feel like I've been accumulating conflicting messages:
- block diagram should fit on one page
vs
- don't create sub-VIs just to save diagram space
So, now I'm in a situation where I have a case frame that calls one sub-VI to do "X". And because of an added feature I need to add, inside the case frame, another instance of "X" with different inputs -- that is, the same function needs to "do more". But the frame is already "full" and I would have to enlarge the diagram beyond the visible page to make space.
Now, I wonder if the admonishment against using sub-VIs to save space is only meant to prevent users from making a sub-VI out of a random collection of unrelated functionality? Because in this case it is more an "extension" of existing functionality. So if I pushed this down into a new sub-VI it would contain related logic: the old functionality with the new feature added.
I'm just thinking about this because someday someone else may have to look at this and I don't want them to have to spend days just to understand what I was trying to do.   Or, worse, have it end up in the R-G thread.
A small detail it is, but many small details can make a big mess...
Thanks and Best Regards,
-- J.

Lots of good points from all over!
Create a sub-VI whenever:
A) You can abstract a chunk of code that you can name, e.g. "Get Reading.vi" or "Set Output.vi".
B) You can encapsulate related functions or data elements, e.g. "DUT State Cache.vi" with modes <Get | Set | Init> that operate on the same data.
C) You identify a group of operations that are logically related by data or timing, that is to say they are coherent.
D) You can imagine reuse of the same code in a number of projects (look at Time to Excel.vi below).
Good developers learn by practice just exactly when to break out a sub-VI.
Great developers pull out their hair trying to figure out how their own code, which they wrote as newbies, ever worked.
Jeff

Similar Messages

  • What Are the Advantages of using Sub VI

    I am just wondering what the advantages are of using a subVI.

    You use a subVI for a lot of reasons. Just like a sub-routine in text languages, it's code that may be called several times by the main. Instead of duplicating a bunch of functions and wires, create one subVI. You can also use a subVI to make your block diagram more manageable. A good rule of thumb to follow is to keep your block diagram no larger than a single screen. Having to scroll back and forth over several screens makes the diagram harder to modify and debug. Using subVIs, your diagram is smaller. It's also much easier to debug a subVI that does some limited function than to debug a large main. It's much easier to test a subVI with a couple of inputs and outputs than one with dozens or even hundreds.

  • ORA-01427: single-row sub query returns more than one row (group by)

    Hello everyone, I am very new to this field, and right now I am working with this SQL, where BEG_BAL_WKST_NUM, WKST_RECEIVED_NUM, WKST_PROCESSED_NUM, and WKST_CANCELED_NUM need to be grouped by, but I am getting "single-row sub query returns more than one row".
    This is the query I am using in my source qualifier:
    select
    SUM(tmp.WIP_TO_BILL_LOC_AMT) AS WIP_TO_BILL_LOC_AMT,
    sum(tmp.REALIZATION_LOC_AMT) AS REALIZATION_LOC_AMT,
    SUM(tmp.NEG_REAL_LOC_AMT) AS NEG_REAL_LOC_AMT,
    sum(tmp.POS_REAL_LOC_AMT) AS POS_REAL_LOC_AMT,
    sum(tmp.BILL_IN_ADVANCE_LOC_AMT) AS BILL_IN_ADVANCE_LOC_AMT,
    sum(tmp.CARRY_FORWARD_LOC_AMT) AS CARRY_FORWARD_LOC_AMT,
    sum(tmp.BILL_TO_CLIENT_LOC_AMT) AS BILL_TO_CLIENT_LOC_AMT,
    sum(tmp.REMAIN_WIP_TO_BILL_LOC_AMT) REMAIN_WIP_TO_BILL_LOC_AMT,
    sum(tmp.AR_INV_AMT) AS AR_INV_AMT,
    sum(tmp.AR_TAX_AMT) AS AR_TAX_AMT,
    tmp.BEG_BAL_WKST_NUM AS BEG_BAL_WKST_NUM,
    tmp.WKST_RECEIVED_NUM AS WKST_RECEIVED_NUM,
    tmp.WKST_PROCESSED_NUM AS WKST_PROCESSED_NUM,
    tmp.WKST_CANCELED_NUM AS WKST_CANCELED_NUM,
    tmp.DURATION AS DURATION,
    tmp.NUM_DAYS AS NUM_DAYS,
    tmp.NUM_HOURS AS NUM_HOURS,
    tmp.NUM_MINUTES AS NUM_MINUTES,
    tmp.NUM_SECONDS AS NUM_SECONDS,
    tmp.LEAD_PROJECT_OFFICE_CODE AS LEAD_PROJECT_OFFICE_CODE,
    tmp.LEAD_PROJECT_TEAM_CODE AS LEAD_PROJECT_TEAM_CODE,
    tmp.ORG_ID AS ORG_ID,
    tmp.RPT_DATE AS RPT_DATE,
    tmp.RPT_DATE_WID AS RPT_DATE_WID,
    tmp.LOCAL_CURR_CODE AS LOCAL_CURR_CODE,
    tmp.USD_EXCH_RATE AS USD_EXCH_RATE,
    tmp.EUR_EXCH_RATE AS EUR_EXCH_RATE,
    tmp.GBP_EXCH_RATE AS GBP_EXCH_RATE
    from(
    SELECT
    WIP_TO_BILL_LOC_AMT as WIP_TO_BILL_LOC_AMT ,
    REALIZATION_LOC_AMT AS REALIZATION_LOC_AMT,
    NEG_REAL_LOC_AMT AS NEG_REAL_LOC_AMT ,
    POS_REAL_LOC_AMT AS POS_REAL_LOC_AMT,
    BILL_IN_ADVANCE_LOC_AMT AS BILL_IN_ADVANCE_LOC_AMT ,
    CARRY_FORWARD_loc_AMT AS CARRY_FORWARD_LOC_AMT,
    bill_to_client_LOC_AMT AS BILL_TO_CLIENT_LOC_AMT ,
    REMAIN_WIP_TO_BILL_LOC_AMT AS REMAIN_WIP_TO_BILL_LOC_AMT,
    AR_inv_AMT AS AR_INV_AMT,
    ar_tax_amt AS AR_TAX_AMT,
    (SELECT count(distinct(RPAD(INTEGRATION_ID,32)))
    FROM wc_twfs_olb_invoice_history_f
    WHERE ((inv_status_type='FIN' AND inv_status_code NOT IN ('COMPLETE','PROCESSED'))
    OR (inv_status_type='WS' AND inv_status_code NOT IN ('PRC'))) --COMPLETED
    AND to_char((sysdate-5),'YYYYMMDD') between to_char(status_start_dt,'YYYYMMDD') and to_char(status_end_dt,'YYYYMMDD') group by rpad(integration_id,32)) AS BEG_BAL_WKST_NUM,
    (SELECT count(distinct(RPAD(INTEGRATION_ID,32)))
    FROM wc_twfs_olb_invoice_history_f
    WHERE (inv_status_code='NEW')
    AND to_char((sysdate-4),'YYYYMMDD') between to_char(status_start_dt,'YYYYMMDD') and to_char(status_end_dt,'YYYYMMDD') group by rpad(integration_id,32)) AS WKST_RECEIVED_NUM,
    (SELECT count(distinct(RPAD(INTEGRATION_ID,32)))
    FROM wc_twfs_olb_invoice_history_f
    WHERE ((inv_status_type='FIN' and inv_status_code IN ('COMPLETE','PROCESSED'))
    OR (inv_status_type='WS' AND inv_status_code IN ('PRC'))) --COMPLETED
    AND to_char((sysdate-4),'YYYYMMDD') between to_char((status_start_dt),'YYYYMMDD') and to_char((status_end_dt),'YYYYMMDD') group by rpad(integration_id,32)) AS WKST_PROCESSED_NUM,
    (SELECT count(distinct(RPAD(INTEGRATION_ID,32)))
    FROM wc_twfs_olb_invoice_history_f
    WHERE (inv_status_type='FIN' AND inv_status_code='CANCELLED')
    AND to_char((sysdate-4),'YYYYMMDD') between to_char((status_start_dt),'YYYYMMDD') and to_char((status_end_dt),'YYYYMMDD') group by rpad(integration_id,32)) AS WKST_CANCELED_NUM,
    DURATION AS DURATION,
    NUM_DAYS AS NUM_DAYS,
    NUM_HOURS AS NUM_HOURS,
    NUM_MINUTES AS NUM_MINUTES,
    NUM_SECONDS AS NUM_SECONDS,
    lead_project_office_code AS LEAD_PROJECT_OFFICE_CODE,
    lead_project_team_code AS LEAD_PROJECT_TEAM_CODE,
    org_id AS ORG_ID,
    trunc(sysdate-1) AS RPT_DATE,
    to_char((sysdate-1),'YYYYMMDD') AS RPT_DATE_WID,
    --last_day(a.report_date) mth_end_dt,
    LOC_CURR_CODE AS LOCAL_CURR_CODE,
    usd_exch_rate AS USD_EXCH_RATE,
    eur_exch_rate AS EUR_EXCH_RATE,
    gbp_exch_rate AS GBP_EXCH_RATE
    FROM Wc_twfs_olb_invoice_history_f
    Where
    RPT_DT_MCAL_PERIOD_WID = (select max(RPT_DT_MCAL_PERIOD_WID) from Wc_twfs_olb_invoice_history_f)) tmp
    group by BEG_BAL_WKST_NUM,WKST_RECEIVED_NUM,WKST_PROCESSED_NUM,WKST_CANCELED_NUM,DURATION,NUM_DAYS,NUM_HOURS,NUM_MINUTES,NUM_SECONDS,
    LEAD_PROJECT_OFFICE_CODE,LEAD_PROJECT_TEAM_CODE,ORG_ID,RPT_DATE,RPT_DATE_WID,
    LOCAL_CURR_CODE,USD_EXCH_RATE,EUR_EXCH_RATE,GBP_EXCH_RATE;
    Can you please suggest what I should do next, and what the solution to this would be?
    Thanks a lot in advance; please show me some direction.

    You may want to change it to something like:
    SELECT SUM(Wip_To_Bill_Loc_Amt) AS Wip_To_Bill_Loc_Amt,
           SUM(Realization_Loc_Amt) AS Realization_Loc_Amt,
           SUM(Neg_Real_Loc_Amt) AS Neg_Real_Loc_Amt,
           SUM(Pos_Real_Loc_Amt) AS Pos_Real_Loc_Amt,
           SUM(Bill_In_Advance_Loc_Amt) AS Bill_In_Advance_Loc_Amt,
           SUM(Carry_Forward_Loc_Amt) AS Carry_Forward_Loc_Amt,
           SUM(Bill_To_Client_Loc_Amt) AS Bill_To_Client_Loc_Amt,
           SUM(Remain_Wip_To_Bill_Loc_Amt) AS Remain_Wip_To_Bill_Loc_Amt,
           SUM(Ar_Inv_Amt) AS Ar_Inv_Amt,
           SUM(Ar_Tax_Amt) AS Ar_Tax_Amt,
           COUNT(DISTINCT CASE
                   WHEN ((Inv_Status_Type = 'FIN' AND
                        Inv_Status_Code NOT IN ('COMPLETE', 'PROCESSED')) OR
                        (Inv_Status_Type = 'WS' AND Inv_Status_Code NOT IN ('PRC'))) --COMPLETED
                        AND To_Char((SYSDATE - 5), 'YYYYMMDD') BETWEEN
                        To_Char(Status_Start_Dt, 'YYYYMMDD') AND
                        To_Char(Status_End_Dt, 'YYYYMMDD') THEN
                    Rpad(Integration_Id, 32)
                 END) AS Beg_Bal_Wkst_Num,
           /*(SELECT COUNT(DISTINCT(Rpad(Integration_Id, 32)))
              FROM Wc_Twfs_Olb_Invoice_History_f
             WHERE ((Inv_Status_Type = 'FIN' AND
                   Inv_Status_Code NOT IN ('COMPLETE', 'PROCESSED')) OR
                   (Inv_Status_Type = 'WS' AND Inv_Status_Code NOT IN ('PRC'))) --COMPLETED
               AND To_Char((SYSDATE - 5), 'YYYYMMDD') BETWEEN
                   To_Char(Status_Start_Dt, 'YYYYMMDD') AND
                   To_Char(Status_End_Dt, 'YYYYMMDD')
             GROUP BY Rpad(Integration_Id, 32)) AS Beg_Bal_Wkst_Num,
           (SELECT COUNT(DISTINCT(Rpad(Integration_Id, 32)))
              FROM Wc_Twfs_Olb_Invoice_History_f
             WHERE (Inv_Status_Code = 'NEW')
               AND To_Char((SYSDATE - 4), 'YYYYMMDD') BETWEEN
                   To_Char(Status_Start_Dt, 'YYYYMMDD') AND
                   To_Char(Status_End_Dt, 'YYYYMMDD')
             GROUP BY Rpad(Integration_Id, 32)) AS Wkst_Received_Num,
           (SELECT COUNT(DISTINCT(Rpad(Integration_Id, 32)))
              FROM Wc_Twfs_Olb_Invoice_History_f
             WHERE ((Inv_Status_Type = 'FIN' AND
                   Inv_Status_Code IN ('COMPLETE', 'PROCESSED')) OR
                   (Inv_Status_Type = 'WS' AND Inv_Status_Code IN ('PRC'))) --COMPLETED
               AND To_Char((SYSDATE - 4), 'YYYYMMDD') BETWEEN
                   To_Char((Status_Start_Dt), 'YYYYMMDD') AND
                   To_Char((Status_End_Dt), 'YYYYMMDD')
             GROUP BY Rpad(Integration_Id, 32)) AS Wkst_Processed_Num,
           (SELECT COUNT(DISTINCT(Rpad(Integration_Id, 32)))
              FROM Wc_Twfs_Olb_Invoice_History_f
             WHERE (Inv_Status_Type = 'FIN' AND Inv_Status_Code = 'CANCELLED')
               AND To_Char((SYSDATE - 4), 'YYYYMMDD') BETWEEN
                   To_Char((Status_Start_Dt), 'YYYYMMDD') AND
                   To_Char((Status_End_Dt), 'YYYYMMDD')
             GROUP BY Rpad(Integration_Id, 32)) AS Wkst_Canceled_Num,*/
           Duration AS Duration,
           Num_Days AS Num_Days,
           Num_Hours AS Num_Hours,
           Num_Minutes AS Num_Minutes,
           Num_Seconds AS Num_Seconds,
           Lead_Project_Office_Code AS Lead_Project_Office_Code,
           Lead_Project_Team_Code AS Lead_Project_Team_Code,
           Org_Id AS Org_Id,
           Trunc(SYSDATE - 1) AS Rpt_Date,
           To_Char((SYSDATE - 1), 'YYYYMMDD') AS Rpt_Date_Wid,
           --last_day(a.report_date) mth_end_dt,
           Loc_Curr_Code AS Local_Curr_Code,
           Usd_Exch_Rate AS Usd_Exch_Rate,
           Eur_Exch_Rate AS Eur_Exch_Rate,
           Gbp_Exch_Rate AS Gbp_Exch_Rate
      FROM Wc_Twfs_Olb_Invoice_History_f
    WHERE Rpt_Dt_Mcal_Period_Wid =
           (SELECT MAX(Rpt_Dt_Mcal_Period_Wid)
              FROM Wc_Twfs_Olb_Invoice_History_f)
    GROUP BY Beg_Bal_Wkst_Num,
              Wkst_Received_Num,
              Wkst_Processed_Num,
              Wkst_Canceled_Num,
              Duration,
              Num_Days,
              Num_Hours,
              Num_Minutes,
              Num_Seconds,
              Lead_Project_Office_Code,
              Lead_Project_Team_Code,
              Org_Id,
              Rpt_Date,
              Rpt_Date_Wid,
              Local_Curr_Code,
              Usd_Exch_Rate,
              Eur_Exch_Rate,
              Gbp_Exch_Rate;
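    For reference, the reason for the ORA-01427: a scalar subquery in the SELECT list must return exactly one row, but each of the original subqueries ends with GROUP BY rpad(integration_id,32), so it returns one row per integration ID. The rewrite above replaces them with conditional aggregation, which computes the same counts in a single pass over the table. A minimal sketch of the pattern, using a hypothetical table t with columns k and status:
    -- Fails with ORA-01427 as soon as t contains more than one k:
    --   SELECT (SELECT COUNT(*) FROM t GROUP BY k) FROM dual;
    -- Conditional aggregation returns a single row instead:
    SELECT COUNT(DISTINCT CASE WHEN status = 'NEW' THEN k END) AS new_cnt
      FROM t;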

  • Proper array size to be used in bulk insert

    What is the proper array size to be used in bulk insert?
    I have around 1 million records. Should I insert them all at once or distribute them over many iterations?

    I'd generally expect external tables to be more efficient than SQL*Loader if only because you don't have to spend cycles loading data into a staging table. Depending on the file, Oracle may be able to access the data in parallel via the external table interface.
    From a pure efficiency standpoint, it would be best if the processing could be encapsulated into a single multi-table insert statement. Whether that is realistic, of course, depends on your logic, how complicated that SQL statement would be, etc. If you have to resort to PL/SQL, bulk processing is surely the way to go. It doesn't matter too much what LIMIT size you choose -- something like 100 or 1000 is generally appropriate -- but the marginal difference is pretty small. On a load of only 1 million rows, it may not be particularly easy to measure.
    Justin
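    A minimal sketch of the LIMIT pattern described above, assuming a hypothetical staging table src and target table tgt (the names are illustrative only):
    DECLARE
      CURSOR c IS SELECT * FROM src;        -- hypothetical source table
      TYPE t_rows IS TABLE OF src%ROWTYPE;
      l_rows t_rows;
    BEGIN
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_rows LIMIT 1000;  -- batch size, per the note above
        EXIT WHEN l_rows.COUNT = 0;
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO tgt VALUES l_rows(i);           -- hypothetical target table
        COMMIT;                                       -- one commit per batch
      END LOOP;
      CLOSE c;
    END;
    /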

  • Can I use Sub Query Factoring Here?

    Hi;
    SQL>SELECT * FROM V$VERSION;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Linux: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    select /*+ PARALLEL(det, 4) */ '12062' snapshot_id,det.journal_entry_line_id, det.accounting_date,det.company_code,det.account_number,
    det.transaction_id, det.transaction_id_type, det.amount,det.currency_code,det.debit_or_credit,det.category,det.subcategory,det.reference1,det.reference1_type,
    det.reference2,det.reference2_type,det.gl_batch_id,det.marketplace_id,det.cost_center,det.gl_product_line,det.location,det.project,det.sales_channel,
    det.created_by,det.creation_date,det.last_updated_by,det.last_updated_date,agg.age,last_day(to_date('04/21/2010','MM/DD/YYYY')) snapshot_day
    from (
    select company_code, account_number, transaction_id,
    decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,
    (last_day(to_date('04/21/2010','MM/DD/YYYY')) - min(z.accounting_date) ) age,sum(z.amount)
    from (
         select /*+ PARALLEL(use, 2) */    company_code,substr(account_number, 1, 5) account_number,transaction_id,
         decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,use.amount,use.accounting_date
         from financials.unbalanced_subledger_entries use
         where use.accounting_date >= to_date('04/21/2010','MM/DD/YYYY')
         and use.accounting_date < to_date('04/21/2010','MM/DD/YYYY') + 1
    UNION ALL
         select /*+ PARALLEL(se, 2) */  company_code, substr(se.account_number, 1, 5) account_number,transaction_id,
         decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,se.amount,se.accounting_date
         from financials.temp2_sl_snapshot_entries se,financials.account_numbers an
         where se.account_number = an.account_number
         and an.subledger_type in ('C', 'AC')
    ) z
    group by company_code,account_number,transaction_id,decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type)
    having abs(sum(z.amount)) >= 0.01
    ) agg,
    (
         select /*+ PARALLEL(det, 2) */ det.journal_entry_line_id,  det.accounting_date, det.company_code, det.account_number, det.transaction_id,  decode(det.transaction_id_type, 'CollectionID', 'SettlementGroupID', det.transaction_id_type) transaction_id_type,
         det.amount, det.currency_code, det.debit_or_credit, det.category, det.subcategory, det.reference1, det.reference1_type, det.reference2, det.reference2_type,
         det.gl_batch_id, det.marketplace_id, det.cost_center, det.gl_product_line, det.location, det.project, det.sales_channel, det.created_by, det.creation_date,
         det.last_updated_by, det.last_updated_date
         from financials.unbalanced_subledger_entries det
         where accounting_date >= to_date('04/21/2010','MM/DD/YYYY')
         and accounting_date < to_date('04/21/2010','MM/DD/YYYY') + 1
    UNION ALL
    select /*+ PARALLEL(det, 2) */  det.journal_entry_line_id, det.accounting_date, det.company_code, det.account_number, det.transaction_id,
    decode(det.transaction_id_type, 'CollectionID', 'SettlementGroupID', det.transaction_id_type) transaction_id_type,  det.amount, det.currency_code,
    det.debit_or_credit, det.category, det.subcategory, det.reference1, det.reference1_type, det.reference2, det.reference2_type, det.gl_batch_id, det.marketplace_id,
    det.cost_center, det.gl_product_line, det.location, det.project, det.sales_channel, det.created_by, det.creation_date, det.last_updated_by, det.last_updated_date
    from financials.temp2_sl_snapshot_entries det,financials.account_numbers an
    where det.account_number = an.account_number
    and an.subledger_type in ('C', 'AC')
    ) det
                       where agg.company_code = det.company_code
                       and agg.account_number = substr(det.account_number, 1, 5)
                       and agg.transaction_id = det.transaction_id
                       and agg.transaction_id_type = det.transaction_id_type
    /
    Execution Plan
    | Id  | Operation                          | Name                         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT                   |                              |    12M|  8012M|       |   541K  (1)| 01:48:21 |        |      |            |
    |   1 |  PX COORDINATOR                    |                              |       |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10005                     |    12M|  8012M|       |   541K  (1)| 01:48:21 |  Q1,05 | P->S | QC (RAND)  |
    |*  3 |    HASH JOIN BUFFERED              |                              |    12M|  8012M|  1098M|   541K  (1)| 01:48:21 |  Q1,05 | PCWP |            |
    |   4 |     PX RECEIVE                     |                              |    35M|  3992M|       |   166K  (2)| 00:33:16 |  Q1,05 | PCWP |            |
    |   5 |      PX SEND HASH                  | :TQ10003                     |    35M|  3992M|       |   166K  (2)| 00:33:16 |  Q1,03 | P->P | HASH       |
    |   6 |       VIEW                         |                              |    35M|  3992M|       |   166K  (2)| 00:33:16 |  Q1,03 | PCWP |            |
    |*  7 |        FILTER                      |                              |       |       |       |            |          |  Q1,03 | PCWC |            |
    |   8 |         HASH GROUP BY              |                              |    35M|  4528M|       |   166K  (2)| 00:33:16 |  Q1,03 | PCWP |            |
    |   9 |          PX RECEIVE                |                              |    35M|  4528M|       |   166K  (2)| 00:33:16 |  Q1,03 | PCWP |            |
    |  10 |           PX SEND HASH             | :TQ10001                     |    35M|  4528M|       |   166K  (2)| 00:33:16 |  Q1,01 | P->P | HASH       |
    |  11 |            HASH GROUP BY           |                              |    35M|  4528M|       |   166K  (2)| 00:33:16 |  Q1,01 | PCWP |            |
    |  12 |             VIEW                   |                              |    35M|  4528M|       |   164K  (1)| 00:33:00 |  Q1,01 | PCWP |            |
    |  13 |              UNION-ALL             |                              |       |       |       |            |          |  Q1,01 | PCWP |            |
    |  14 |               PX BLOCK ITERATOR    |                              |    11 |   539 |       |  1845   (1)| 00:00:23 |  Q1,01 | PCWC |            |
    |* 15 |                TABLE ACCESS FULL   | UNBALANCED_SUBLEDGER_ENTRIES |    11 |   539 |       |  1845   (1)| 00:00:23 |  Q1,01 | PCWP |            |
    |* 16 |               HASH JOIN            |                              |    35M|  2012M|       |   163K  (1)| 00:32:37 |  Q1,01 | PCWP |            |
    |  17 |                BUFFER SORT         |                              |       |       |       |            |          |  Q1,01 | PCWC |            |
    |  18 |                 PX RECEIVE         |                              |    21 |   210 |       |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |  19 |                  PX SEND BROADCAST | :TQ10000                     |    21 |   210 |       |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
    |* 20 |                   TABLE ACCESS FULL| ACCOUNT_NUMBERS              |    21 |   210 |       |     2   (0)| 00:00:01 |        |      |            |
    |  21 |                PX BLOCK ITERATOR   |                              |    56M|  2701M|       |   162K  (1)| 00:32:35 |  Q1,01 | PCWC |            |
    |  22 |                 TABLE ACCESS FULL  | TEMP2_SL_SNAPSHOT_ENTRIES    |    56M|  2701M|       |   162K  (1)| 00:32:35 |  Q1,01 | PCWP |            |
    |  23 |     PX RECEIVE                     |                              |    35M|    18G|       | 82859   (1)| 00:16:35 |  Q1,05 | PCWP |            |
    |  24 |      PX SEND HASH                  | :TQ10004                     |    35M|    18G|       | 82859   (1)| 00:16:35 |  Q1,04 | P->P | HASH       |
    |  25 |       BUFFER SORT                  |                              |    12M|  8012M|       |            |          |  Q1,04 | PCWP |            |
    |  26 |        VIEW                        |                              |    35M|    18G|       | 82859   (1)| 00:16:35 |  Q1,04 | PCWP |            |
    |  27 |         UNION-ALL                  |                              |       |       |       |            |          |  Q1,04 | PCWP |            |
    |  28 |          PX BLOCK ITERATOR         |                              |    11 |  2255 |       |   923   (1)| 00:00:12 |  Q1,04 | PCWC |            |
    |* 29 |           TABLE ACCESS FULL        | UNBALANCED_SUBLEDGER_ENTRIES |    11 |  2255 |       |   923   (1)| 00:00:12 |  Q1,04 | PCWP |            |
    |* 30 |          HASH JOIN                 |                              |    35M|  7514M|       | 81936   (1)| 00:16:24 |  Q1,04 | PCWP |            |
    |  31 |           PX RECEIVE               |                              |    21 |   210 |       |     2   (0)| 00:00:01 |  Q1,04 | PCWP |            |
    |  32 |            PX SEND BROADCAST       | :TQ10002                     |    21 |   210 |       |     2   (0)| 00:00:01 |  Q1,02 | P->P | BROADCAST  |
    |  33 |             PX BLOCK ITERATOR      |                              |    21 |   210 |       |     2   (0)| 00:00:01 |  Q1,02 | PCWC |            |
    |* 34 |              TABLE ACCESS FULL     | ACCOUNT_NUMBERS              |    21 |   210 |       |     2   (0)| 00:00:01 |  Q1,02 | PCWP |            |
    |  35 |           PX BLOCK ITERATOR        |                              |    56M|    11G|       | 81840   (1)| 00:16:23 |  Q1,04 | PCWC |            |
    |  36 |            TABLE ACCESS FULL       | TEMP2_SL_SNAPSHOT_ENTRIES    |    56M|    11G|       | 81840   (1)| 00:16:23 |  Q1,04 | PCWP |            |
    Predicate Information (identified by operation id):
       3 - access("AGG"."COMPANY_CODE"="DET"."COMPANY_CODE" AND "AGG"."ACCOUNT_NUMBER"=SUBSTR("DET"."ACCOUNT_NUMBER",1,5) AND
                  "AGG"."TRANSACTION_ID"="DET"."TRANSACTION_ID" AND "AGG"."TRANSACTION_ID_TYPE"="DET"."TRANSACTION_ID_TYPE")
       7 - filter(ABS(SUM(SYS_OP_CSR(SYS_OP_MSR(SUM("Z"."AMOUNT"),MIN("Z"."ACCOUNTING_DATE")),0)))>=0.01)
      15 - filter("USE"."ACCOUNTING_DATE"<TO_DATE(' 2010-04-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "USE"."ACCOUNTING_DATE">=TO_DATE('
                  2010-04-21 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
      16 - access("SE"."ACCOUNT_NUMBER"="AN"."ACCOUNT_NUMBER")
      20 - filter("AN"."SUBLEDGER_TYPE"='AC' OR "AN"."SUBLEDGER_TYPE"='C')
      29 - filter("ACCOUNTING_DATE"<TO_DATE(' 2010-04-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "ACCOUNTING_DATE">=TO_DATE(' 2010-04-21
                  00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
      30 - access("DET"."ACCOUNT_NUMBER"="AN"."ACCOUNT_NUMBER")
      34 - filter("AN"."SUBLEDGER_TYPE"='AC' OR "AN"."SUBLEDGER_TYPE"='C')
    This query is failing due to a TEMP issue (TEMP space out of space).
    My TEMP tablespace is 70 GB and no one else is using TEMP space while this query is in execution.
    PGA = 16 GB.
    What I can see from the execution plan is: two large result sets, AGG (13 million rows) and DET (135 million rows), are being joined with HASH JOIN BUFFERED, which is getting spilled to TEMP space, causing the TEMP outage.
    Is there any way I can rewrite this query (probably using subquery factoring, i.e. the WITH clause) to avoid accessing the TEMP2_SL_SNAPSHOT_ENTRIES table twice? TEMP2_SL_SNAPSHOT_ENTRIES is a 12 GB non-partitioned table and I cannot use any other filter to restrict rows from this table.

    Adding more information here:
    Inner sub query (which forms the bottom of DET):
    select /*+ PARALLEL(det, 2) */  det.journal_entry_line_id, det.accounting_date, det.company_code, det.account_number, det.transaction_id,
    decode(det.transaction_id_type, 'CollectionID', 'SettlementGroupID', det.transaction_id_type) transaction_id_type,  det.amount, det.currency_code,
    det.debit_or_credit, det.category, det.subcategory, det.reference1, det.reference1_type, det.reference2, det.reference2_type, det.gl_batch_id, det.marketplace_id,
    det.cost_center, det.gl_product_line, det.location, det.project, det.sales_channel, det.created_by, det.creation_date, det.last_updated_by, det.last_updated_date
    from financials.temp2_sl_snapshot_entries det,financials.account_numbers an
    where det.account_number = an.account_number
    and an.subledger_type in ('C', 'AC');
    Plan hash value: 976020246
    | Id  | Operation               | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT        |                           |    35M|  7514M|   163K  (1)| 00:32:47 |        |      |            |
    |   1 |  PX COORDINATOR         |                           |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)   | :TQ10001                  |    35M|  7514M|   163K  (1)| 00:32:47 |  Q1,01 | P->S | QC (RAND)  |
    |*  3 |    HASH JOIN            |                           |    35M|  7514M|   163K  (1)| 00:32:47 |  Q1,01 | PCWP |            |
    |   4 |     BUFFER SORT         |                           |       |       |            |          |  Q1,01 | PCWC |            |
    |   5 |      PX RECEIVE         |                           |    21 |   210 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |   6 |       PX SEND BROADCAST | :TQ10000                  |    21 |   210 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
    |*  7 |        TABLE ACCESS FULL| ACCOUNT_NUMBERS           |    21 |   210 |     2   (0)| 00:00:01 |        |      |            |
    |   8 |     PX BLOCK ITERATOR   |                           |    56M|    11G|   163K  (1)| 00:32:45 |  Q1,01 | PCWC |            |
    |   9 |      TABLE ACCESS FULL  | TEMP2_SL_SNAPSHOT_ENTRIES |    56M|    11G|   163K  (1)| 00:32:45 |  Q1,01 | PCWP |            |
    Predicate Information (identified by operation id):
       3 - access("DET"."ACCOUNT_NUMBER"="AN"."ACCOUNT_NUMBER")
       7 - filter("AN"."SUBLEDGER_TYPE"='AC' OR "AN"."SUBLEDGER_TYPE"='C')
    Statistics
             31  recursive calls
              3  db block gets
        1634444  consistent gets
        1625596  physical reads
            636  redo size
    1803659818  bytes sent via SQL*Net to client
         125054  bytes received via SQL*Net from client
          11331  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
       56645822  rows processed
    Other sub query (that forms AGG):
         select /*+ PARALLEL(se, 2) */  company_code, substr(se.account_number, 1, 5) account_number,transaction_id,
         decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,se.amount,se.accounting_date
         from financials.temp2_sl_snapshot_entries se,financials.account_numbers an
         where se.account_number = an.account_number
         and an.subledger_type in ('C', 'AC');
    Plan hash value: 976020246
    | Id  | Operation               | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT        |                           |    35M|  2012M|   163K  (1)| 00:32:37 |        |      |            |
    |   1 |  PX COORDINATOR         |                           |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)   | :TQ10001                  |    35M|  2012M|   163K  (1)| 00:32:37 |  Q1,01 | P->S | QC (RAND)  |
    |*  3 |    HASH JOIN            |                           |    35M|  2012M|   163K  (1)| 00:32:37 |  Q1,01 | PCWP |            |
    |   4 |     BUFFER SORT         |                           |       |       |            |          |  Q1,01 | PCWC |            |
    |   5 |      PX RECEIVE         |                           |    21 |   210 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |   6 |       PX SEND BROADCAST | :TQ10000                  |    21 |   210 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
    |*  7 |        TABLE ACCESS FULL| ACCOUNT_NUMBERS           |    21 |   210 |     2   (0)| 00:00:01 |        |      |            |
    |   8 |     PX BLOCK ITERATOR   |                           |    56M|  2701M|   162K  (1)| 00:32:35 |  Q1,01 | PCWC |            |
    |   9 |      TABLE ACCESS FULL  | TEMP2_SL_SNAPSHOT_ENTRIES |    56M|  2701M|   162K  (1)| 00:32:35 |  Q1,01 | PCWP |            |
    Predicate Information (identified by operation id):
       3 - access("SE"."ACCOUNT_NUMBER"="AN"."ACCOUNT_NUMBER")
       7 - filter("AN"."SUBLEDGER_TYPE"='AC' OR "AN"."SUBLEDGER_TYPE"='C')
    Statistics
             31  recursive calls
              3  db block gets
        1634444  consistent gets
        1625596  physical reads
            592  redo size
    1803659818  bytes sent via SQL*Net to client
         125054  bytes received via SQL*Net from client
          11331  SQL*Net roundtrips to/from client
              3  sorts (memory)
              0  sorts (disk)
       56645822  rows processed
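    (For what it's worth, subquery factoring here would look roughly like the untested structural sketch below: the shared join against TEMP2_SL_SNAPSHOT_ENTRIES is factored into one WITH block, and the MATERIALIZE hint asks Oracle to scan it once and reuse the result for both AGG and DET. The sketch omits the UNION ALL branches against UNBALANCED_SUBLEDGER_ENTRIES and most of the DET columns for brevity, and materializing may simply trade one kind of TEMP usage for another.)
    WITH snap AS (
      SELECT /*+ MATERIALIZE */
             se.company_code,
             substr(se.account_number, 1, 5) account_number_5,
             se.transaction_id,
             decode(se.transaction_id_type, 'CollectionID', 'SettlementGroupID',
                    se.transaction_id_type) transaction_id_type,
             se.amount,
             se.accounting_date
        FROM financials.temp2_sl_snapshot_entries se,
             financials.account_numbers an
       WHERE se.account_number = an.account_number
         AND an.subledger_type IN ('C', 'AC')
    )
    SELECT det.*, agg.age
      FROM (SELECT company_code, account_number_5, transaction_id, transaction_id_type,
                   last_day(to_date('04/21/2010','MM/DD/YYYY')) - MIN(accounting_date) age,
                   SUM(amount) total_amount
              FROM snap
             GROUP BY company_code, account_number_5, transaction_id, transaction_id_type
            HAVING ABS(SUM(amount)) >= 0.01) agg,
           snap det
     WHERE agg.company_code        = det.company_code
       AND agg.account_number_5    = det.account_number_5
       AND agg.transaction_id      = det.transaction_id
       AND agg.transaction_id_type = det.transaction_id_type;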

  • Single row sub query error

    Hi All
    I'm having the following query and it's giving me "single row sub query returns more than one row". I did some research and all I could come across is using IN, ALL, ANY, NOT IN... with the WHERE clause but not in the CASE statement, so can anyone please point me in the right direction?
    SELECT ROW_NUMBER () OVER (PARTITION BY traffic_sample_id ORDER BY bin_data_id)
                                                                  AS row_number_1,
           COUNT (bin_data_id) OVER (PARTITION BY traffic_sample_id) AS count_1,
           (CASE
               WHEN days != 2
                  THEN (SELECT traffic_sample_id
                          FROM (SELECT ROW_NUMBER () OVER (PARTITION BY traffic_sample_id ORDER BY bin_data_id)
                                                                            AS rn,
                                       traffic_sample_id
                                  FROM bin_data)
                         WHERE rn <= 48)
               ELSE (SELECT traffic_sample_id
                       FROM bin_data)
            END
           ) traffic_sample_id
      FROM (SELECT ROW_NUMBER () OVER (PARTITION BY t1.traffic_sample_id ORDER BY t1.bin_data_id)
                                                                    AS ROW_NUMBER,
                   COUNT (t1.bin_data_id) OVER (PARTITION BY t1.traffic_sample_id)
                                                                         AS COUNT,
                   t1.traffic_sample_id, t1.bin_data_id, t1.end_intv_time,
                   (  TO_DATE (t2.end_date, 'mmddyy')
                    - TO_DATE (t2.start_date, 'mmddyy')
                   ) AS days
              FROM bin_data t1, traffic_sample t2
             WHERE t1.traffic_sample_id = t2.traffic_sample_id(+))
    Thanks

    Hi,
    One of these two SELECT statements must be bringing back more than one row:
                  THEN (SELECT traffic_sample_id
                          FROM (SELECT ROW_NUMBER () OVER (PARTITION BY traffic_sample_id ORDER BY bin_data_id)
                                                                            AS rn,
                                       traffic_sample_id
                                  FROM bin_data)
                         WHERE rn <= 48)
               ELSE (SELECT traffic_sample_id
                       FROM bin_data)

  • How can I tell if my software licence is a Family Pack or Single use?

    I need to work out which licence I have, as my daughter has a MacBook. Her school provided her with iWork but will take it off when she leaves in a month or so. If I have the Family Pack I can load Pages, Keynote, etc., but not if I only have Single Use.

    If you still have the box or electronic receipt you could call your closest Apple Store and give them the serial # or model # and they can check it. Do you remember what you paid? Also, there are no longer any Family Packs with Lion, but I see your sig says Snow Leopard.

  • How can I create a bell graph without using sub-VIs?

    I would like to know how I can create a bell graph without using sub-VIs. The data that I created consists of 500 readings with values of 0 to 100; I calculated the mean value and standard deviation. I hope someone can help me.

    Here's a quick example I threw together that generates a sort-of-bell-curve shaped data distribution, then performs the binning and plotting.
    -Kevin P.
    Attachments:
    Binning example.vi (51 KB)
    Binning example.png (12 KB)

  • What is a good size SSD to use in the 2011 MBPs?

    I have heard many people say they buy 115, 128, 240, or 512 GB SSDs to use as their startup or boot drive... I know SSDs are faster than HDDs, but what is the most logical size SSD to use as a boot disk? I am wanting to get an SSD and keep music, movies, and other files on my HDD in a data doubler. Thanks for any input in advance!

    Drawbacks of SSD:
    1: Cannot be securely erased; the driver won't allow it.
    2: Limited writes (the answer to #1).
    3: Can't be used for large file transfers on/off to take advantage of the speed (see #2).
    4: Prohibitive costs; you need high speed and high storage for Bootcamp/triple-booting Macs (OS X, Linux, Windows).
    This is the fastest, largest SSD money can buy.
    A rough in-the-head calculation with the drive above: provided you had an external drive just as fast to offload data to (no bottlenecks), it could offload 500 GB of data in about 15 minutes (roughly 570 MB/s) versus about an hour (roughly 140 MB/s) with a 7,200 RPM drive.
    One drive is $1800 and the other is $100-$200.
    One is not securely erasable and the other is.
    One has limited writes and the other has no practical limit.
    One would have to have pretty deep pockets to throw away $1800 SSD drives like tissue paper to utilize their awesome speed for the only practical thing they're good for: transferring huge amounts of files quickly.

  • Is there a file size limitation to using this service?

    I am working on a large PDF file (26 MB) and I need to re-size the original and mess with the margins. I don't believe there is an easy way to do this in Adobe Acrobat 6.0 Pro. It sounds like I have to convert the file back to a Word document, do the adjustments there, and then produce a new PDF. I have two questions:
    Is there a file size limitation to using this service?
    Will a PDF-to-Word conversion maintain the format of the original PDF?
    Thanks
    Tim

    Good day Tim,
    There is a 100MB file size limitation for submitting files to the ExportPDF service.  As for the quality of the conversion, from our FAQ entitled Will Adobe ExportPDF convert both text and formatting information?:
    Adobe ExportPDF is capable of exporting high quality information, but the quality of your Word or Excel document depends on the quality of the PDF file you start with. For instance, if your PDF file was originally authored in Microsoft Word or Excel and converted to PDF using the PDFMaker functionality of Adobe Acrobat®, your PDF file contains a rich set of information that can be captured by Adobe ExportPDF. This includes relative positioning of tables, images, and even multi-column text, as well as page, paragraph, and font attributes.
    If your PDF file was originally authored using simpler PDF generation methods, such as “print to PDF” or “scan to PDF” options, Adobe ExportPDF will convert any recognizable text and then use sophisticated conversion intelligence to preserve as much of the page layout as possible.
    Please let us know if you have any other questions!
    Kind regards,
    David

  • How to increase the retrieving size of instances using PAPI filters

    Hi,
    How can I increase the retrieving size of instances using PAPI filters?
    In my engine database, when the instance count exceeds 2500, we get the following exception.
    If we log in to the user workspace we are able to see the instances, but while trying to retrieve them from PAPI we get the exception below, and the user's inbox size shows as 0.
    In the Process Admin console we set all the required parameters.
    Still I am getting the same problem.
    Can you please give me the solution?
    <Mar 23, 2010 8:58:24 PM SGT> <Warning> <RMI> <BEA-080003> <RuntimeException thrown by rmi server: fuego.ejbengine.EJBProcessControl_1zamnl_EOImpl.getInstancesByFilter(Lfuego.papi.impl.j2ee.EJBSecureEngineInfo;Ljava.lang.String;Lfuego.papi.Filter;)
    java.lang.ClassCastException: cannot assign instance of java.util.HashSet to field fuego.view.FilterImpl.attributes of type java.util.List in instance of fuego.view.FilterImpl.
    java.lang.ClassCastException: cannot assign instance of java.util.HashSet to field fuego.view.FilterImpl.attributes of type java.util.List in instance of fuego.view.FilterImpl
         at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2032)
         at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1212)
         at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1953)
         at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871)
         at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1753)
    Regards,
    Bharath.

    Hi Bharath,
    Believe me, you have an incompatibility between different build numbers.
    PAPI has an instance cache. When a certain process has more instances than the maximum specified, the cache is switched to status OPEN. That means that PAPI will not be able to resolve some instance queries using the information in the cache; when that occurs, PAPI forwards all those queries to the engine.
    The incompatibility introduced is in the communication between PAPI and the engine. So you only get the exception when you have more instances than the maximum cache size.
    Regards,
    Ariel

  • How to append two Word documents into a single one using Java

    How can I append two Word documents into a single one using Java?
    We tried this, but it does not append the one Word document to the other.
    Source code:
    import java.io.*;

    public class AppendTwoWordFiles {
         public static void main(String[] arg) throws IOException {
              FileInputStream fi = null;
              FileOutputStream fo = null;
              try {
                   System.out.println("Enter the source file name u want to append");
                   BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
                   File f1 = new File(br.readLine().toString());
                   System.out.println("Enter the Destination file name ");
                   File f2 = new File(br.readLine().toString());
                   fi = new FileInputStream(f1);
                   fo = new FileOutputStream(f2, true);
                   byte b[] = new byte[2];
                   // BUG: this loop reads and discards every chunk (empty body)...
                   while ((fi.read(b)) != -1);
                   // ...so only the final buffer contents are written once, after EOF.
                   fo.write(b);
                   System.out.println("Successfully append the file");
              } catch (FileNotFoundException e) {
                   e.printStackTrace();
              } catch (IOException e) {
                   e.printStackTrace();
              } finally {
                   fi.close();
                   fo.close();
              }
         }
    }
    Please reply quickly... what should I follow?

    Use this code, and give the paths of both files like this:
    source file ---- C:/workspace/Practice/src/com/moksha/ws/test/practice.text
    destination file ---- C:/workspace/City/src/com/moksha/ws/test/practice1.text
    import java.io.*;

    public class AppendTwoWordFiles {
         public static void main(String[] arg) throws IOException {
              FileInputStream fi = null;
              FileOutputStream fo = null;
              try {
                   System.out.println("Enter the source file name u want to append");
                   BufferedReader br = new BufferedReader(new InputStreamReader(
                             System.in));
                   File f1 = new File(br.readLine().toString());
                   System.out.println("Enter the Destination file name ");
                   File f2 = new File(br.readLine().toString());
                   fi = new FileInputStream(f1);
                   fo = new FileOutputStream(f2, true); // true = open f2 in append mode
                   byte b[] = new byte[2];
                   int len = 0;
                   // Write each chunk as it is read, using the actual byte count returned.
                   while ((len = fi.read(b)) > 0) {
                        fo.write(b, 0, len);
                   }
                   System.out.println("Successfully appended the file");
              } catch (FileNotFoundException e) {
                   e.printStackTrace();
              } catch (IOException e) {
                   e.printStackTrace();
              } finally {
                   fi.close();
                   fo.close();
              }
         }
    }

  • ClassCastException using Subant and wldeploy ant task

    Hi!
    I'm using subant to call all the different build.xml files located in subdirectories. The buildfile looks like this:
    <project name="extern.call" default="callall">
         <target name="callall">
              <fileset      id="buildfile.set" dir=".." includes="*2/build.xml">
                   <exclude name="Br*2/*"/>
              </fileset>
              <subant target="deploy-local" inheritall ="false" failonerror="true">
                   <fileset      refid="buildfile.set"/>
              </subant>
         </target>
    </project>
    The first called build.xml file works fine ... but the execution of the second build.xml (it's not important which file is the second one; it always crashes at the second call) stops with a "java.lang.ClassCastException".
    See Stacktrace:
    [subant] weblogic.Deployer -debug -nowait -verbose -upload -noexit -name ClarifyRead -source \build\ClarifyRead\delivery\ClarifyRead.ear -targets myserver -adminurl t3://localhost:7001 -user weblogic -password ******** -deploy
    [subant] dumping Exception stack
    [subant] java.lang.ClassCastException
    [subant] at weblogic.management.deploy.utils.DeployerHelper.uploadSource(DeployerHelper.java:586)
    [subant] at weblogic.Deployer.runBodyWithAuthenticatedSubject(Deployer.java:824)
    [subant] at weblogic.Deployer.runBody(Deployer.java:711)
    [subant] at weblogic.utils.compiler.Tool.run(Tool.java:146)
    [subant] at weblogic.utils.compiler.Tool.run(Tool.java:103)
    [subant] at weblogic.Deployer.runMain(Deployer.java:566)
    [subant] at weblogic.Deployer.mainWithExceptions(Deployer.java:576)
    [subant] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [subant] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [subant] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [subant] at java.lang.reflect.Method.invoke(Method.java:324)
    [subant] at weblogic.ant.taskdefs.management.WLDeploy.invokeMain(WLDeploy.java:264)
    [subant] at weblogic.ant.taskdefs.management.WLDeploy.execute(WLDeploy.java:204)
    [subant] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:269)
    [subant] at org.apache.tools.ant.Task.perform(Task.java:364)
    [subant] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:65)
    [subant] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:269)
    [subant] at org.apache.tools.ant.Task.perform(Task.java:364)
    [subant] at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:340)
    [subant] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:269)
    [subant] at org.apache.tools.ant.Task.perform(Task.java:364)
    [subant] at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:65)
    [subant] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:269)
    [subant] at org.apache.tools.ant.Task.perform(Task.java:364)
    [subant] at org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:340)
    [subant] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:269)
    [subant] at org.apache.tools.ant.Task.perform(Task.java:364)
    [subant] at org.apache.tools.ant.Target.execute(Target.java:301)
    [subant] at org.apache.tools.ant.Target.performTasks(Target.java:328)
    [subant] at org.apache.tools.ant.Project.executeTarget(Project.java:1215)
    [subant] at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:383)
    [subant] at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:182)
    [subant] at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:112)
    [subant] at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:269)
    [subant] at org.apache.tools.ant.Task.perform(Task.java:364)
    [subant] at org.apache.tools.ant.Target.execute(Target.java:301)
    [subant] at org.apache.tools.ant.Target.performTasks(Target.java:328)
    [subant] at org.apache.tools.ant.Project.executeTarget(Project.java:1215)
    [subant] at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.run(InternalAntRunner.java:379)
    [subant] at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.main(InternalAntRunner.java:135)
    There is no difference between using Ant in the Eclipse environment or standalone; the result is the same.
    Do someone else have the same problem?
    Kind regards
    Joseph

    Hi
    I got the same message and I couldn't resolve it either; I am wondering if you found the key to the problem?
    Thanks
    David Huang

  • Function module to find out database size, free space, used size

    Is there any function module to find out the database size, free space, and used size?
    I am looking for an FM that gives all the details of the database:
    what database, what is the size, free space, used space, etc.,
    instead of writing case by case for each database based on CASE SY-DBSYS.

    Hi,
    Check this FM:
    DB02_ORA_SELECT_DBA_SEGMENT
    Alternatively you can check the tcode DB02.
    Thanks,
    Mahesh

  • Can a Single Use License be installed on both a desktop and a laptop?

    Hi,
    I purchased a single-use license of Leopard OS 10.5 last year to upgrade my Mac desktop. I also have an iBook with 10.3.9 (Panther) that I haven't used for years. Recently I tried upgrading my iBook to 10.5, but the disc kept running and running for hours without installing anything. All I saw was a bright white screen with a spinning circle in the middle. Is a single-use upgrade disc only good for one upgrade?
    my iBook has 1.33 GHz, PowerPC G4, 256 built-in
    Please help, thank you.

    SRQP
    Welcome to Apple Discussions.
    Is a single-use upgrade disc only good for one upgrade?
    The Apple Software License Agreement allows you to use an installer to install to one computer. So long as the software is installed on one computer it may not legally be installed on a second computer.
    cornelius
