Clear Package Taking 2 Hours for 3000 recs - two filters

Hello All,
        We are running a clear package with 3 filters selected and the rest set to <all>, and it is taking 2 hours. We've been told by consultants that this is normal and that it is more efficient to clear the data at the SQL level.
I'm wondering what the best practice is out there: if you all clear via BPC, why is the clear package taking so long?
Thanks,
Saquib

BPC's clear does not really "clear" the data from the FACT tables. Instead it submits offsetting values to the selected intersections so that each cell nets to zero, which is why it can take a long time depending on how big your application is.
However, unless you have a reason not to, using as many filters as possible helps performance. If you leave a dimension at <ALL>, the package has to evaluate every base member of that dimension.
I think most of the 2 hours is spent building the list of base members and calculating the current values in SQL Server; appending 3000 records does not take long. So the speed at which SQL Server can gather the data is what matters, and the usual SQL Server tuning (I/O setup, for example) is always recommended.
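For illustration only, here is a rough, untested sketch of what the clear logically does at the relational level. The fact table and dimension column names (tblFactFinance, Category, TimeID, Entity, Account) are placeholders, not your actual application, and the real generated SQL will differ:
-- Hypothetical sketch: a BPC-style clear appends offsetting records instead of deleting rows.
-- Table and column names are placeholders.
INSERT INTO tblFactWBFinance (Category, TimeID, Entity, Account, SignedData)
SELECT Category, TimeID, Entity, Account,
       -SUM(SignedData) AS SignedData          -- the opposite of the current value
FROM (
      SELECT Category, TimeID, Entity, Account, SignedData FROM tblFactFinance
      UNION ALL
      SELECT Category, TimeID, Entity, Account, SignedData FROM tblFactWBFinance
      UNION ALL
      SELECT Category, TimeID, Entity, Account, SignedData FROM tblFac2Finance
     ) f
WHERE Category = 'ACTUAL'                      -- the three filtered dimensions;
  AND TimeID   = '2011.JAN'                    -- dimensions left at <ALL> add no predicate,
  AND Entity   = 'SALESUK'                     -- so all of their base members are scanned
GROUP BY Category, TimeID, Entity, Account
HAVING SUM(SignedData) <> 0;                   -- only intersections that currently hold a value
Most of the cost is in reading and aggregating the existing records across the fact tables, not in appending the few thousand offsetting rows, which is why tighter filters and SQL Server I/O tuning make the difference.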

Similar Messages

  • Email sends taking an hour for people to receive

    Simple emails are taking an hour for people to receive.  Is anyone else having these issues, or does anyone have any ideas?

    If you don't need thumbnails, try iMovie 06.
    With iMovie 06 you can edit your movie instantly after importing it.
    iMovie 06 is a free download for iLife 08 owners.
    http://www.apple.com/support/downloads/imovieHD6.html

  • HT4889 Having issues using Migration Assistant taking 200+ hours for 60 gigs.

    I am transferring from my source which is running Lion to my target which is running Mountain Lion.  It is taking 200+ hours with the ethernet connection.  Is firewire faster? Help!

    Thanks for the reply,
    I didn't understand exactly what to put into AppleScript sorry (never used it before) - so I just ran the following from the Terminal:
    $ echo $UID
    $ sudo chown -R 503 ~
    And restarted - everything works now - thanks heaps.
    What happened? Is this something I did, or did Migration Assistant just get confused?
    I have 3 new Mac Pros to set up on Monday, so I just want to ensure those are hassle free.
    Cheers
    Ben

  • Every 3rd data package taking long time for execution

    Hi Everyone
    We are facing a strange situation. Our scenario involves doing a full load from DSO to CUBE.
    Start routines are not very database intensive, and care has been taken to write them in an optimized way.
    But strangely, every 3rd data package is taking exceptionally longer than the other data packages.
    a) The DTP has 3 parallel processes.
    b) The time spent in extraction, rules, and update is constant for every data package.
    c) The start routine time is larger for every 3rd data package and keeps on increasing, e.g. 5 mins, 10 mins, 24 mins, 33 mins; it increases with each 3rd package.
    I tried to analyze the data that was taking so much time, but found no difference between the normal and the slower data packages (i.e. there was no logical difference in the data that would make the start routine behave like this).
    I was wondering what the possible reasons could be; maybe some other external system factors are responsible. If someone can help in this regard, it will be highly appreciated.

    Hi Hemanth,
    In your start routine, are you by any chance adding to or multiplying the number of records in the source_package? Something like copying the source package into an internal table, adding records to the internal table, and then copying it back to the source package? If some logic of this sort is in your start routine, you need to refresh your internal table; otherwise the internal table records keep increasing with every data package, and the processing time can increase as the load progresses. This is one common mistake I have seen. Please check whether your code does something like that, refresh the internal tables, and see if that makes any difference.
    Thanks and Regards
    Subray Hegde

  • Mac Pro - Imaging Taking 3 Hours for 30Gb

    Hi All,
    I have had an odd issue with a batch of brand new Mac Pros.
    I have setup a 10.5.6 Mac Pro with 30gb of Apps and System to be used as an Image for the rest of the Mac Pros.
    Imaging with Carbon Copy Cloner, Chronosync or the Apple Server System Imaging Utility can take 2.5 to 3 hours.
    I really have no idea why!
    After I have cloned the image it will deploy back and the Mac Pros are running very nicely with that configuration, all very fast.
    I have taken the Image and then created an Image of that whilst on an Xserve and that took 10 minutes.
    I am convinced that the Mac Pros are just oddly slow at doing this but cannot understand why.
    I have checked the disks, run permissions checks, and run the combo updater.
    Anyone got any thoughts?
    Thanks,
    David Lee

    Hyperthreading is a PIA. Try shutting it off, because it slows servers down instead of speeding them up. The problem is that the OS spins off threads and manages processes because it knows what is being run, so it can optimize and schedule them; but then you have a chip that doesn't know what is running, just sees streams of instructions, and tries to optimize them itself. So it takes something organized and tries to reorganize it, things start getting out of sequence and have to wait. Hyperthreading worked for home computers that generally ran only a couple of simple, non-threaded apps at a time. On my Mac Pro (Early 2008), 8-core 3.2GHz, 32GB RAM, Quadro FX 5600, with no hyperthreading, I have done similar operations and they took between 10 and 20 minutes.

  • Render time taking 1 hour for 4 seconds

    I am trying to render a 4-second clip of 3D text doing a 360° turn. I export in H.264 and have okay computer specs:
    3.04 GHz Quad core
    12 gigs of ram
    I'm new to this and want to start exporting more video for a project, but I can't do it with this render time.
    Brandon

    LilTunchiB8 wrote:
    What would be a conventional, frame-based codec? Thank you so much for the help!
    Uncompressed AVI is one, but the file sizes are going to surprise you. Quicktime with the PNG codec is usually smaller, but still huge. Quicktime with the PhotoJPEG codec is even smaller. Some popular choices that are even smaller are DNxHD and Cineform (either MOV or AVI). They're still going to be much larger than the H.264, but that's okay!
    For many (most) professionals, the usual workflow is to render a lossless file (or nearly lossless) out of AE and then use the Adobe Media Encoder or Premiere to create the final deliverable. There are several reasons, but one is that AE is not very good at rendering h.264 files (and there are some very technical reasons for this). Anyway, the AE team deprecated the h.264 rendering in the latest version of After Effects because of it.
    In your case, I would suggest rendering into a TIFF sequence (or something similar). This way, if there is a crash or something, you can pick your render up at the last frame it rendered rather than having to start at the beginning. I always do this when I'm doing 3d work and the render is going to be over an hour. Then you can bring that TIFF sequence into Premiere as if it's a video file, put your audio and everything to it, and send your final H.264 render from Premiere.

  • HT204406 What can I do to prevent it taking literally hours for match to talk to iTunes?

    Stop it taking hours for Match to communicate with iTunes regularly.

    Glad you are up and running OK. You should ALWAYS backup prior to an iOS update.
     Cheers, Tom

  • Skype download for Mac taking 20 hours

    Why is it taking 20 hours for the new version of Skype to download onto my Mac laptop? I have a job interview coming up, and I needed to download this because my login would not work on the old version. Help!!

    Not to sound fraternising, but the fact of the matter is that nothing has changed in the TCP/IP that Skype uses.  What has changed is that Microsoft now manages the code, and with that comes the Microsoft way of doing things. If this includes a complete lack of version control and software management, I suggest you do not change one thing.
    What you do not explain, and fail to mention, is that features that used to work do not work in your new versions, and the simplest way to get them to work is to remove everything that has been changed.
    What has been changed is how the TCP/IP packets are sent on the Internet. The software that MS acquired used straightforward, standard TCP/IP with non-lingering sockets, which is the ITU-T variant of TCP/IP that is also the US FCC RFC. If Microsoft wants to implement Skype on NetBIOS, go ahead, but do not expect others to follow, and allow the rest of the world, including those that used to run Skype on Windows, to leave and find the attempt a complete waste of time and the software utterly worthless.
    It is very well defined what functionality Skype is expected to deliver; right now, that is to perform just as well as it did two years ago. If you cannot achieve this with Windows used on servers on the network, that is a change that removed functionality and did not add anything. It is simple to get it back: drop the Windows relay servers. Microsoft cannot rewrite international standards, and do not tell us to endorse any attempt of that nature. It would be nice if the world were a cube, but all your effort in telling us that it is convenient does not change facts: it will stay round as a bowling ball, and TCP/IP has to be fully standard, not one that keeps sockets open long after the connection has been taken down to allow MS to implement IPX/SPX functionality in Skype. We are not interested, nobody is, and I doubt Mr. Nadella would endorse this had he known.

  • Query taking long time for EXTRACTING the data more than 24 hours

    Hi ,
    The query is taking a long time (more than 24 hours) to extract the data. Please find the query and explain plan details below; even though indexes are available on the tables, it goes for a FULL TABLE SCAN. Please suggest what I can do.
    SQL> explain plan for
         select a.account_id, round(a.account_balance,2) account_balance,
                nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
                to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
                to_char(nvl(i.payment_due_date, to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
                ah.current_balance - ah.previous_balance amount,
                decode(ah.invoice_id, null, 'A', 'I') transaction_type
         from account a, account_history ah, invoice i
         where a.account_id = ah.account_id
           and a.account_type_id = 1000002
           and round(a.account_balance,2) > 0
           and (ah.invoice_id is not null or ah.adjustment_id is not null)
           and ah.CURRENT_BALANCE > ah.previous_balance
           and ah.invoice_id = i.invoice_id(+)
           AND a.account_balance > 0
         order by a.account_id, ah.effective_start_date desc;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
    | 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
    | 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
    |* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
    |* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
    |* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
    |* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
    | 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
    Predicate Information (identified by operation id):
    2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
    3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
    ROUND("A"."ACCOUNT_BALANCE",2)>0)
    4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
    5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
    IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
    22 rows selected.
    Index Details:
    SQL> select INDEX_OWNER, INDEX_NAME, COLUMN_NAME, TABLE_NAME from dba_ind_columns
         where table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
    INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
    OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
    OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
    OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
    OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
    OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
    OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
    OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
    32 rows selected.
    Regards,
    Bathula
    Oracle-DBA

    I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
    Also, you do not need two lines for these conditions:
    and round(a.account_balance, 2) > 0
    AND a.account_balance > 0
    You can just use: and a.account_balance >= 0.005 (rounding to two decimals gives a result greater than zero exactly when the balance is at least 0.005, which also covers a.account_balance > 0).
    So the formatted query is:
    select a.account_id,
           round(a.account_balance, 2) account_balance,
           nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
           to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
           to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
                   'DD-MON-YYYY') due_date,
           ah.current_balance - ah.previous_balance amount,
           decode(ah.invoice_id, null, 'A', 'I') transaction_type
      from account a, account_history ah, invoice i
    where a.account_id = ah.account_id
       and a.account_type_id = 1000002
       and (ah.invoice_id is not null or ah.adjustment_id is not null)
       and ah.CURRENT_BALANCE > ah.previous_balance
       and ah.invoice_id = i.invoice_id(+)
       AND a.account_balance >= .005
     order by a.account_id, ah.effective_start_date desc;
    You will probably want to select:
    1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
    2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
    3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
    Try the query above after creating the following composite indexes. The order of the columns is important:
    create index account_composite_i on account(account_type_id, account_balance, account_id);
    create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
    create index invoice_composite_i on invoice(invoice_id, payment_due_date);
    All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well so that we should not need to touch the tables at all to satisfy the query.
    Try the query after creating these indexes.
    A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:
    alter session set workarea_size_policy = manual;
    alter session set sort_area_size = 2147483647;
    alter session set hash_area_size = 2147483647;
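    It may also be worth re-checking the plan once the indexes exist, using the same EXPLAIN PLAN / DBMS_XPLAN technique shown earlier in this thread. A cut-down sketch only (substitute the full query above; the predicates here are just a sample):
    -- re-run the plan after creating the composite indexes to confirm the full scans are gone
    EXPLAIN PLAN FOR
      SELECT a.account_id, ah.invoice_id
        FROM account a, account_history ah
       WHERE a.account_id = ah.account_id
         AND a.account_type_id = 1000002
         AND a.account_balance >= 0.005;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);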

  • Procedure is taking more than 25 hours for execution

    Hi,
    The procedure below is taking more than 25 hours to execute.
    The table CA.CR_L_D has around 15 crore (150 million) records.
    Please suggest ways to reduce the execution time.
    CREATE OR REPLACE PROCEDURE NM.L_de_pro
    IS
      Type W_Bk_1  Is Table of Number Index By Pls_Integer;
      Type W_Bk_2  Is Table of W_Bk_1 Index By Pls_Integer;
      Type W_Bk_Ct Is Table of W_Bk_2 Index By Pls_Integer;
      Type Lo_Ac   Is Table of Number Index By Pls_Integer;
      Type Lo_Ac_2 Is Table of Lo_Ac Index By Pls_Integer;
      Wo_BK_Co W_Bk_Ct;
      L_L_Ac   Lo_Ac_2;
    Begin
      Delete From NM.L_WO_C_B;
      For Sim in 1..10 Loop
        For j in 1..17 Loop
          Select /* + FIRST_ROWS */ CS.LAL + CS.LNAL
            BULK COLLECT INTO Wo_BK_Co(Sim)(j)
            from CA.CR_L_D CS, NM.CR_L_D_PD PD
           Where CS.INS = PD.INS_NBR
             and PD.C_B_N <> j
             and CS.Sc = Sim;
        End Loop;
      End Loop;
      For Sim in 1..10 Loop
        For j in 1..17 Loop
          L_L_Ac(Sim)(j) := 0;
          For i in 1..Wo_BK_Co(Sim)(j).Last Loop
            L_L_Ac(Sim)(j) := L_L_Ac(Sim)(j) + Wo_BK_Co(Sim)(j)(i);
          End Loop;
          --DBMS_Output.Put_Line(L_L_Ac(Sim)(j));
        End Loop;
        Insert Into NM.L_WO_C_B
          (Sc, W_Bk_1, W_Bk_2, WO_Bkt_3, WO_Bkt_4,
           WO_Bkt_5, WO_Bkt_6, WO_Bkt_7, WO_Bkt_8, WO_Bkt_9, W_Bk_10, W_Bk_11, W_Bk_12, W_Bk_13,
           W_Bk_14, W_Bk_15, W_Bk_16, W_Bk_17)
        Select Sim, L_L_Ac(Sim)(1), L_L_Ac(Sim)(2), L_L_Ac(Sim)(3),
               L_L_Ac(Sim)(4), L_L_Ac(Sim)(5), L_L_Ac(Sim)(6),
               L_L_Ac(Sim)(7), L_L_Ac(Sim)(8), L_L_Ac(Sim)(9),
               L_L_Ac(Sim)(10), L_L_Ac(Sim)(11), L_L_Ac(Sim)(12),
               L_L_Ac(Sim)(13), L_L_Ac(Sim)(14), L_L_Ac(Sim)(15),
               L_L_Ac(Sim)(16), L_L_Ac(Sim)(17) From Dual;
        Commit;
      End Loop;
    End;
    /

    Well...
    No guarantees and completely untested as I don't have your tables, data or know what indexes you have on the table or even whether I've understood the purpose of what you are trying to do...
    CREATE OR REPLACE PROCEDURE NM.L_de_pro IS
    BEGIN
      INSERT INTO NM.L_WO_C_B
                (Sc
                ,W_Bk_1
                ,W_Bk_2
                ,WO_Bkt_3
                ,WO_Bkt_4
                ,WO_Bkt_5
                ,WO_Bkt_6
                ,WO_Bkt_7
                ,WO_Bkt_8
                ,WO_Bkt_9
                ,W_Bk_10
                ,W_Bk_11
                ,W_Bk_12
                ,W_Bk_13
                ,W_Bk_14
                ,W_Bk_15
                ,W_Bk_16
                ,W_Bk_17)
      WITH sim AS (select rownum sim from dual connect by rownum <= 10)
          ,j   AS (select rownum j from dual connect by rownum <= 17)
      SELECT sim.sim
            ,SUM(DECODE(j.j,1,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,2,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,3,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,4,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,5,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,6,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,7,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,8,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,9,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,10,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,11,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,12,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,13,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,14,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,15,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,16,cs.lal + cs.lnal))
            ,SUM(DECODE(j.j,17,cs.lal + cs.lnal))
      FROM   CA.CR_L_D CS JOIN NM.CR_L_D_PD PD ON (CS.INS = PD.INS_NBR)
                          JOIN sim ON (CS.SC = sim.sim)
                          JOIN j ON (PD.C_B_N != j.j)
      GROUP BY sim.sim;
      COMMIT;
    END;
    My understanding is that your PL/SQL code was loading a list of numbers (lal + lnal) into a 2D array (effectively making it a 3D array), then processing that array to add up the list of numbers in each location of the 2D array, storing the results in another 2D array, and then looping through that array to insert the records.
    Hopefully, I've got the same calculation achieved in just SQL. ;)
    Edited by: BluShadow on Oct 3, 2008 9:53 AM
    forgot some commas

  • Clear Package Using Selection File

    I want a clear package that has no prompts and receives the selection from a selection file automatically, without user input.  That way I can schedule the clear.  How is this done??
    Phil

    Hello,
        If you are looking at the dynamic script, this is the instruction that takes the parameters from the user interface:
    'prompt for a selection of data to clear
    PROMPT(SELECTINPUT,%SELECTION%,,"Select the members to CLEAR",%DIMS%)
    If you do not want to use this, you can hard-code the %SELECTION% variable and fill it with your selection.
    This variable is used at the end of the dynamic script:
    BEGININFO(%SQLDUMP%)
              select %FACTDIMS%,0 as SIGNEDDATA
              FROM             
               ( SELECT %FACTDIMS%,0 as SIGNEDDATA FROM TBLFACT%APP% WHERE %SELECTION% 
           UNION ALL
              SELECT %FACTDIMS%,0 as SIGNEDDATA FROM TBLFACTWB%APP% WHERE %SELECTION%
              UNION ALL
              SELECT %FACTDIMS%,0 as SIGNEDDATA FROM TBLFAC2%APP% WHERE %SELECTION% 
           ) as ZeroTable
              group by %FACTDIMS% OPTION(MAXDOP 1)
    ENDINFO
    The format of the variable should be:
    "DIM1=value1, DIM2=value2, ..."
    Hope this helps you,
    Mihaela

  • I need to reinstall my original factory disc and reset my computer to the beginning. It currently has lion. Do I need to erase my HD first to clear and make new room for the install disc.

    I need to reinstall my original factory disc and reset my computer to the beginning. It currently has lion. Do I need to erase my HD first to clear and make new room for the install disc.

    Prepare Your Mac for Sale
    Boot from the OS X Installer Disc One that came with the computer.  After the installer loads select your language and click on the Continue button.  When the menu bar appears select Disk Utility from the Utilities menu.  After DU loads select the startup volume from the left side list then click on the Erase tab.  Set the format type to Mac OS Extended (Journaled) then click on the Options button.  Select the one pass Zero Data option and click on the OK button.  Then click on the Erase button.
    Note: You can skip the Zero Data option if you are not concerned about removing sensitive personal data from the hard drive.  If you choose to skip this part of the process then it is possible for others to recover data from the hard drive.  The Zero Data procedure will prevent others from getting access to your personal information.
    This process will take 30 minutes to several hours depending upon the size of the hard drive.  After formatting has completed quit DU and return to the installer.  Now complete the OS X installation.  At the completion of the installation do not restart the computer.  Instead just shut it off.  The next user will be presented with the Setup Assistant when they turn on the computer just as it would if new out of the box.

  • IMac 7,1. Snow Leopard. 2 printers that printed nothing, or more than ±1/4 from page top. Reinstalled OS. Time Machine BU now allows only latest (faulty) files. ±4 series shown, each taking +8 hours to complete. How to reinstall from 'DEVICES' in 'iMac'?

    iMac 7,1. Snow Leopard. 2 printers that printed nothing, or more than ±1/4 from page top. Reinstalled OS. Time Machine BU now allows only latest (faulty) files. ±4 series shown, each taking +8 hours to complete. How to reinstall from ‘DEVICES’ in ‘iMac’?

    You need to get rid of MacKeeper (Zeobit).  Do not use their uninstaller, follow the instructions here ...
    http://applehelpwriter.com/2011/09/21/how-to-uninstall-mackeeper-malware/
    When that is done there does not appear to be a lot wrong if you correct the red ink entries.
    The 4Gb is sufficient for Yosemite though 6 or 8 Gb would be better.  You may find the download slow so be prepared.

  • My iMac, OS 10.7.5, is abnormally slow on start-up, taking many minutes for my password sign-in window to show up.  I've run Disk Permissions and have verified my hard drive and all is in order.  What do I do to start up my Mac in a normal manner?

    My iMac, OS 10.7.5, is abnormally slow on start-up, taking many minutes for my password sign-in window to show up.  I've run Disk Permissions and have verified my hard drive and all is in order.  What do I do to start up my Mac in a normal manner?

    If you have more than one user account, these instructions must be carried out as an administrator.
    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Step 1
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left. If you don't see that menu, select
    View ▹ Show Log List
    from the menu bar.
    Enter "BOOT_TIME" (without the quotes) in the search box. Note the timestamps of those log messages, which refer to the times when the system was booted. Now clear the search box and scroll back in the log to the last boot time when you had the problem. Select the messages logged after the boot, during the time something abnormal was happening. Copy them to the Clipboard (command-C). Paste into a reply to this message (command-V).
    For example, if the problem is a slow startup taking three minutes, post the messages timestamped within three minutes after the boot time, not before. Please include the BOOT_TIME message at the beginning of the log extract.
    If there are runs of repeated messages, post only one example of each. Don’t post many repetitions of the same message.
    When posting a log extract, be selective. In most cases, a few dozen lines are more than enough.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Important: Some private information, such as your name, may appear in the log. Anonymize before posting.
    Step 2
    Still in Console, look under System Diagnostic Reports for crash or panic logs, and post the entire contents of the most recent one, if any. In the interest of privacy, I suggest you edit out the “Anonymous UUID,” a long string of letters, numbers, and dashes in the header of the report, if present (it may not be.) Please don’t post shutdownStall, spin, or hang logs — they're very long and not helpful.

  • Clear Package Not Working - Data File is Empty

    Hi
    I am trying to run a clear package against a precise section of data. However the package keeps erroring and the log file shows me the below:
    TOTAL STEPS  3
    1. Export_Zero:        completed  in 1 sec.
    2. Load Cube:          Failed  in 1 sec.
    3. Clear:              completed  in 1 sec.
    [Selection]
    ENABLETASK= Yes
    CHECKLCK= Yes
    (Member Selection)
    Category: OPM
    Time: 2011.P01,2011.P02,2011.P03,2011.P04,2011.P05,2011.P06,2011.P07,2011.P08,2011.P09,2011.P10,2011.P11,2011.P12
    Entity: TOT_ENTITY
    AccLabour: WEIGHTED_EFF_ALL2
    DataSrc: INPUT
    Line: FB1
    Machine: TOT_PROCESS
    Product: NOPRODUCT
    WIP: 56088876
    [Messages]
    The data file is empty. Please check the data file and try again.
    I know there is data in this section of the database, as I am running a report which looks at the same place and data can be found. From looking at other threads it has been recommended to turn off Data Audit. I've checked this and can confirm that it is switched off for this particular application (in other applications it is switched on).
    Please can someone advise what else I need to do in order to run this clear package successfully.
    Thanks,
    Jamie

    Thanks Roberto
    Having just read that thread I see that they mention that they are not clearing from a calculated member. I think the member we have been trying to clear from is a calculated one.
    Would a clear from fact table allow us to clear the data against this member? Or would we need to clear the data against the non-calculated members which go into the calculation?
    Kind Regards,
    Jamie
    UPDATE
    I've managed to run the clear package. I was getting this error because I was running it against calculated members. When I changed it to run against the non-calculated members, the package ran successfully and cleared the data values.
    Thanks for your help
    Edited by: jamiet440 on Nov 10, 2011 12:28 PM
