Long running clear jobs

Hi, we have been unable to use the BPC Clear and Clear from FACT table packages for many of our clears because they run for too long. In these situations, we delete the records from the FACT, FAC2 and FACTWB tables using a SQL delete statement and then run a full optimize. I realize that we could add more selections to clear the data a little bit at a time, but that would also take a long time.
Are there any other options to clear data besides using the clear packages or a SQL delete statement? One of our consultants mentioned using a stored procedure, but I think that is basically the same as a SQL delete statement. We would prefer that the users run the process instead of IT. Has anyone come up with other alternatives?
Thank you for any input. 
Ann Florence

Dear Ann Florence,
I suggest you run a "Full Optimize" first, then run the Delete Fact package. Remember that clearing data is not a matter of a few minutes. Alternatively, you could use a SQL statement to clear the fact tables after running "Full Optimize". Don't forget to run "Full Optimize" again afterwards and check a report to verify which data is still available.
Kind Regards,
Wandi Sutandi
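
For reference, the workaround Ann describes amounts to deleting the same slice from all three underlying tables and then reorganizing. A minimal sketch in SQL Server syntax, assuming the tables carry the names from her post and filtering on a hypothetical TIMEID value (real table names are application-specific and the filter depends on your dimensions):

    BEGIN TRANSACTION;
    -- delete the same slice from all three BPC fact tables,
    -- otherwise reports will show inconsistent data
    DELETE FROM FACT   WHERE TIMEID = '20110100';  -- hypothetical TIME member
    DELETE FROM FAC2   WHERE TIMEID = '20110100';
    DELETE FROM FACTWB WHERE TIMEID = '20110100';
    COMMIT;
    -- afterwards, run a Full Optimize from BPC so the cube is reprocessed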

Similar Messages

  • Alert monitor for long running background jobs

    Hello,
    I have to configure an alert monitor for long running background jobs (those running more than 20,000 secs) using a rule-based MTE. I have created the rule-based MTE and assigned the MTE class CCMS_GET_MTE_BY_CLASS to the virtual node, but I don't find a node where I can specify the time.
    Could anyone guide me on how to do this?
    Thanks,
    Kasi

    Hi *,
    I think the missing bit is where to set the maximum runtime. The runtime is set in the collection method and not the MTE class.
    process:  rz20 --> SAP CCMS Technical Expert Monitors --> All Contexts on local application server --> background --> long-running jobs. Click on 'Jobs over Runtime Limits' then properties, click the methods tab then double click 'CCMS_LONGRUNNING_JOB_COLLECT', in the parameters tab you can then set the maximum runtime.
    If you need to monitor specific jobs, follow the process (http://help.sap.com/saphelp_nw70/helpdata/en/1d/ab3207b610e3408fff44d6b1de15e6/content.htm) to create the rule based monitor, then follow this process to set the runtime.
    Hope this helps.
    Regards,
    Riyaan.

  • Can a long running batch job causing deadlocks bring server performance down?

    Hi
    I have a customer with a long running batch job (approx. 6 hrs); recently we experienced a performance issue where the job now takes more than 12 hrs and the database server is crawling. The alert.log shows some deadlocks.
    The batch job is in fact many parallel child batch jobs running at the same time, which would explain the deadlocks.
    So I am wondering whether deadlocks could cause the whole server to crawl, to the point where even connecting to the database using Toad is slow, as is running ls -lrt.
    Thanks
    Rgds
    Ung

    Kok Aik wrote:
    According to the documentation, a complex deadlock can make the job appear hung and affect throughput, but it doesn't mention how it would make the whole server slow down. My initial thought would be that the rolling back and reconstruction of CR copies would have used up the CPU.
    I think your ideas on rolling back, CR construction etc. are good guesses. If you have deadlocks, then you have multiple processes working in the same place in the database at the same time, so there may be other "near-deadlocks" that cause all sorts of interference problems.
    Obviously you could have processes queueing for the same resource for some time without getting into a deadlock.
    You can have a long running update hit a row which was changed by another user after the update started - which would cause the long-running update to roll back and start again (Tom Kyte refers to this as 'write consistency' if you want to search his website for a discussion on the topic).
    Once concurrent processes start sliding out of their correct sequences because of a few delays, it's possible for reports that used to run when nothing else was going on to suddenly find themselves running while updates are going on - and doing lots more reads (physical I/O) of the undo tablespace to take blocks a long way back into the past.
    And so on...
    Anyway, according to the customer, the problem seems to be related to the lgpr_size, as the problem disappeared after they reverted it back to its original default value, 0. I couldn't figure out what the lgpr_size is - can you explain?
    Thanks
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking" Carl Sagan
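
    As an aside to the queueing point above: a quick way to check whether sessions really are waiting on one another, rather than just running slowly, is the blocking information in v$session; a minimal sketch, assuming Oracle 10g or later and SELECT privilege on the dynamic performance views:

        -- sessions currently held up by another session
        SELECT sid, serial#, event, blocking_session, seconds_in_wait
        FROM   v$session
        WHERE  blocking_session IS NOT NULL;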

  • Long running QueryDocumentProperties job crash

    Hi,
    I have a very simple piece of code using the Plumtree.Remote.PRC namespace, looping through all document properties and checking for the presence of a specific value. Each time, the code crashes around the 700th doc. The whole process takes a little less than 1 sec/doc.
    Am I hitting some timeout problem here?
    Stack dump below.
    Thanks for any ideas.
    [email protected]
    Plumtree.Remote.PRC.PortalException: Exception of type Plumtree.Remote.PRC.PortalException was thrown. ---> System.Web.Services.Protocols.SoapException: Server was unable to process request. --> Invalid pointer
    at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
    at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
    at com.plumtree.remote.prc.soap.DirectoryAPIService.queryDocumentProperties(String sLoginToken, Int32 nCardID)
    at com.plumtree.remote.prc.soap.DirectoryProcedures.QueryProperties(String sLoginToken, Int32 nCardID)
    --- End of inner exception stack trace ---
    at Plumtree.Remote.PRC.DocumentManagerWrapper.QueryDocumentProperties(Int32 documentID)
    at Mercator.Portal.Scheduler.Jobs.CBrocomAddPropJob.GetDocumentProperty(Int32 docId, String id)
    at Mercator.Portal.Scheduler.Jobs.CBrocomAddPropJob.Mercator.Portal.Framework.Interfaces.IJob.Run(String jobProgId, String parameter)

    Hi Dean,
    We use version 5.0.4.
    I have 2 jobs, in fact. One processes the cards in the order of their objectid; the other picks the objectids from a hashtable (hence a different processing order). Both fail at around the same index (700), but not always exactly the same one - the range is about +/- 10. If I limit the job to a previously failed card, it does work, so I guess it cannot be the data. The number of cards is 786. I ran the job again after completely removing the cards and their deletion history, with the same result. I'll run the job again with PTSpy on, but I think there was nothing special in it except:
    209 05-20 08:45:19 Warn Plumtree.dll 14176 10924 PTInternalSession.cpp(3184) *** COM exception writing Log Message: IDispatch error #16389 (0x80044205): [Failed to open the log file for writing. log file: C:\Program Files\plumtree\ptportal\5.0\settings\logs\PTMachine.log]
    2 05-20 08:43:51 Warn Common Library 14176 10924 PTCommon.cpp(977) ***SetError *** (0x80044205): Error while writing message to log file.
    which I do not understand either; the file is there and is perfectly writable.
    I guess I'll try splitting the job in two and opening a support incident.
    Thanks.
    Michel.

  • Long running background job

    Hi,
    We are facing a problem with an MM job: sometimes it finishes in a few minutes, and sometimes it runs for so long that we have to kill it manually.
    Can you please tell me what the possible reasons for this are, and how to find the root cause when a job that should finish on schedule runs for a long time?
    Your reply is very much appreciated.
    Regards
    Balaji Vedagiri

    Hi,
    Please confirm you have enough disk space available.
    Please do a consistency check of the BTC Processing System as follows:
    1. Run Transaction SM65
    2. Select Goto ... Additional tests
    3. select these options: Perform TemSe check
                             Consistency check DB tables
                             List
                             Check profile parameters
                             Check host names
                             Determine no. of jobs in queue
                             All job servers
       and then click execute.
    4. Once you get the results check to see if you have any inconsistencies
       in any of your tables.
    5. If there are any inconsistencies reported then run the "Background
       Processing Analyses" (SM65 .. Goto ... Additional Tests) again.
       This time check the "Consistency check DB tables" and the
       "Remove Inconsistencies" options.
    6. Run this a couple of times until all inconsistencies are removed from
       the tables.
    Make sure you run this SM65 check when the system is quiet and no other
    batch jobs are running as this would put a lock on the TBTCO table till
    it finishes.  This table may be needed by any other batch job that is
    running or scheduled to run at the time SM65 checks are running.
    Please confirm you are running the following reports daily as per note #48400:
    RSPO0041 (or RSPO1041), RSBTCDEL: To delete old TemSe objects
    RSPO1043 and RSTS0020 for the consistency check.
    Regards,
    Snow

  • Reg: Long running SAP job SAP_XMB_PERF_AGGREGATE

    Hello,
    The above mentioned SAP job has been running for days. I tried changing parameters: in Integration Engine administration (SXMB_ADM), for category PERF, there is a parameter DAYS_TO_KEEP_DATA.
    I also created the index on table SXMSPFAGG, but the job is still taking a very long time.
    Has anyone faced the same kind of issue and fixed it with a different solution?
    FYI - my system is SAP XI 3.1 SPS23.
    Thanks
    Vivekanandan

    Hi
    Check this link
    http://help.sap.com/saphelp_nw04/helpdata/en/8b/08b140cbe49d2ae10000000a155106/frameset.htm
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/58c5f542-0801-0010-228b-c27a7b3c2752?quicklink=index&overridelayout=true (check from page 48 onwards)
    Regards
    Ramesh

  • Cancel long running print job

    I'm using the .NET report view component. We have customers with a lot of data and some print jobs may take several minutes to process.
    Is there a way to cancel the processing of the report once I have set the viewer Reportsource?
    Tosch

    Hello Tosch,
    unfortunately there is no .NET viewer event to indicate whether the data load process has finished or not.
    This was a possible event to trigger on in the ActiveX viewer.
    Maybe you can find something on the .NET OS level. The CR .NET viewer has no event to trigger on.
    Please see all possible events in the viewer class [here|https://boc.sdn.sap.com/node/14834].
    Best regards
    Falk

  • CCMS monitoring - Long running job

    I checked CCMS monitoring (RZ20) in ECC 6.0; there seems to be no option to monitor long running background jobs.
    How can I monitor long running background jobs using transaction RZ20?

    Hi,
    Check this thread: "How to monitor long running background job".
    Thanks
    Sunny

  • Long running init -- impact on delta

    Hello experts,
          I know that this is something of a basic question, but I'd still like someone to clarify.
    The question that I have is regarding the timestamp that is set for the delta records. I have an initialization job that runs for a long time (let's say 4 hours). Is the timestamp for the delta set at the start of the init job or at the end of it? If any transactions occur while the init job is running, are they captured in the delta queue (RSA7), or do they form part of the init job, with the delta marker/timestamp set at the end of the init job so that the new/changed records get picked up in the queue later on?
    I know that the other alternative is to run an init without data transfer and get the rest of the records in full repair requests. I'd like to know the impact of the long running init job before I choose which way to go.
    Thanks in advance for the responses.

    Hi,
    For timestamp-based DataSources like FI and CO, you cannot maintain the safety interval for the delta yourself, as this is maintained by SAP.
    The relevant tables are BWOM2_TIMEST for Controlling and BWOM_SETTINGS for the FI DataSources.
    Note 416265 tells you how to change it for CO, and note 1012874 for FI, but SAP strongly recommends not changing these settings.
    In the case of FI-CO DataSources you will not be able to see the data in the delta queue, as the delta records are not pushed into the delta queue by SAP but are pulled by the BW system when the delta is scheduled.
    So even if the delta is created, you cannot see it in the delta queue.
    For the CO DataSources, as I said, only the records that changed up to 2 hours before the initialization of the DataSource will come into the delta queue and get picked up in the delta.
    The same applies to the FI DataSources.
    Thanks
    Ajeet

  • Long running FI close jobs as a result of FI settlement to 3 cost objects

    We are trying to gain some insight as to how we can deal with the large volume of data that is generated with PennDOT's reclassification of costs in SAP (plant maintenance) and any impact this may have on system performance. Specifically, PennDOT's current process requires settlement to 3 cost objects; the SAP standard is settlement to 2 cost objects. There are approximately 60 million line item transactions created each year by this process.
    More details:
    IES - PennDOT Settlement Process 06/24/10
    Business Requirement - settle to three cost objects
    1) Run SAP settlement program - only settles to 2 cost objects
    • Three line items in COEP per G/L posting
    2) Process to settle to the third cost object
    a. Reverse cost object. This generates a CO line item.
    b. Repost to the third cost object. This generates a CO line item.
    • Six line items in COEP per G/L posting
    3) Assumption: CO line items are a means to post to the third cost item (Cost Center). Only used for internal management reporting; not needed for any other processing or federal audit requirements.
    a. These CO line items can be uniquely identified for archiving by BA, BT and text fields.
    4) The long running monthly and annual close jobs are due to the number of CO line items (COEP) that are in the system. Estimated at 16 to 20 million line items per year; 166,000 items created in one day.

    Hi,
    Please check this link:
    http://www.fhwa.dot.gov/infrastructure/asstmgmt/dipa.pdf
    I am digging into your issue; let me know clearly what you need.
    Thanks,
    Karthik

  • Long running table partitioning job

    Dear HANA gurus,
    I've just finished a table partitioning job for CDPOS (change document items), with 4 partitions by hash on 3 columns.
    The total data volume is around 340GB and the table size was 32GB!
    (The migration job was done without disabling change documents, so I am currently deleting data from the table with RSCDOK99.)
    Before partitioning, the data volume of the table was around 32GB.
    After partitioning, the size has changed to 25GB.
    It took around one and a half hours with an exclusive lock, as mentioned in the HANA administration guide.
    (It is the QA DB, so fewer complaints.)
    I don't think I could do this in the production DB.
    Does anyone have any idea for accelerating this task? (This is the fastest DBMS, HANA!)
    Or do you have any plans for online table partitioning functionality? (To the HANA development team)
    Any comments would be appreciated.
    Cheers,
    - Jason

    Jason,
    looks like we're cross talking here...
    What was your rationale to partition the table in the first place?
           => To reduce the deletion time on CDPOS (as I mentioned, the rows to delete are almost 10% of the whole data volume, so I wanted to save deletion time on the table through the benefits of partitioning, like partition pruning)
    Ok, I see where you're coming from, but did you ever try out if your idea would actually work?
    As deletion of data is heavily related to locating the records to be deleted, creating an index would probably have been the better choice.
    Thinking about it... you want to get rid of 10% of your data and in order to speed the overall process up, you decide to move 100% of the data into sets of 25% of the data - equally holding their 25% share of the 10% records to be deleted.
    The deletion then should run along these 4 sets of 25% of data.
    It's surely me, but where is the speedup potential here?
    How many unloads happened during the re-partitioning?
           => It was fully loaded in memory before I partitioned the table myself (from HANA studio).
    I was actually asking about unloads _during_ the re-partitioning process. Check M_CS_UNLOADS for the time frame in question.
    How do the now longer running SQL statements look like?
           => As I mentioned, selecting/deleting time almost doubled.
    That's not what I asked.
    Post the SQL statement text that was taking longer.
    What are the three columns you picked for partitioning?
           => mandant, objectclas, tabname(QA has 2 clients and each of them have nearly same rows of the table)
    Why those? Because these are the primary key?
    I wouldn't be surprised if the SQL statements only refer to e.g. MANDT and TABNAME in the WHERE clause.
    In that case the partition pruning cannot work and all partitions have to be searched.
    How did you come up with 4 partitions? Why not 13, 72 or 213?
           => I thought each partition's size would be 8GB (32GB/4) if they were divided equally (just a simple thought), and 8GB is almost the same size as the other largest top-20 tables in the HANA DB.
    Alright, so basically that was arbitrary.
    Regarding the last comment of your reply: most people would partition their existing large tables to get the benefits of partitioning (just like me). I think your comment applies to newly inserted data.
    Well, not sure what "most people" would do.
    HASH partitioning a large existing table certainly is not an activity that is just triggered off in a production system. Adding partitions to a range-partitioned table, however, happens all the time.
    - Lars
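
    For reference, both operations discussed above can be driven from plain SQL in HANA; a minimal sketch, assuming the table lives in a hypothetical SAPSR3 schema (schema name and partition count are placeholders, and the ALTER takes the exclusive lock mentioned earlier):

        -- hash re-partitioning as described above
        ALTER TABLE SAPSR3.CDPOS PARTITION BY HASH (MANDANT, OBJECTCLAS, TABNAME) PARTITIONS 4;

        -- check for column store unloads during the re-partitioning window
        SELECT table_name, reason, unload_time
        FROM   M_CS_UNLOADS
        WHERE  table_name = 'CDPOS'
        ORDER BY unload_time DESC;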

  • Re: How to determine the long running jobs in a patch

    Hi,
    How do I determine the long running jobs in a patch?
    Regards

    Hi,
    Check the below MY ORACLE SUPPORT note:
    Note 252422.1 - Check Completed Long Running Jobs In Oracle Apps
    Best regards,
    Rafi
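
    The note aside, a quick check for requests that are still running after an unusually long time can be run directly against the Apps tables; a sketch, assuming access to the APPS schema (the 4-hour threshold is an arbitrary placeholder):

        SELECT request_id,
               phase_code,
               status_code,
               actual_start_date,
               ROUND((SYSDATE - actual_start_date) * 24, 1) AS hours_running
        FROM   apps.fnd_concurrent_requests
        WHERE  phase_code = 'R'                     -- still running
        AND    actual_start_date < SYSDATE - 4/24   -- started more than 4 hours ago
        ORDER  BY actual_start_date;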

  • This is urgent please help! Long running job in SM37 but not in SM66

    Hi,
    Can someone please explain why, when you call SM37, you can see that a job is long running (for example about 70,000 sec), but when you look at the PID where the job is running in SM66 it does not show 70,000 sec, only around 6,000 sec?
    Thank you very much.

    For background processes, additional information is available for the background job that is currently running. You can only display this information if you are logged onto the instance where the job is running, or if you choose Settings and deselect "Display only abbreviated information, avoid RFC". In any case, the job must still be running.
    Regards
    Anilsai

  • Is there a way to get long running SQL Agent jobs information using powershell?

    Hi All,
    Is there a way to get long running SQL Agent jobs information using powershell for multiple SQL servers in the environment?
    Thanks in Advance.
    --Hunt

    I'm running SQL queries to fetch the required details and store them in a centralized table.
    foreach ($svr in get-content "f:\PowerSQL\Input\LongRunningJobsPowerSQLServers.txt") {
        $dt = new-object "System.Data.DataTable"
        $cn = new-object System.Data.SqlClient.SqlConnection "server=$svr;database=master;Integrated Security=sspi"
        $cn.Open()
        $sql = $cn.CreateCommand()
        # A job counts as long running when its runtime exceeds the historical
        # average (plus one standard deviation) of successful runs by more than 5%.
        $sql.CommandText = "
    SELECT
        @@SERVERNAME servername,
        j.job_id AS 'JobId',
        name AS 'JobName',
        max(start_execution_date) AS 'StartTime',
        max(stop_execution_date) AS 'StopTime',
        max(avgruntimeonsucceed) AS 'AvgRunTimeOnSucceed',
        max(DATEDIFF(s,start_execution_date,GETDATE())) AS 'CurrentRunTime',
        max(CASE WHEN stop_execution_date IS NOT NULL THEN  -- completed runs only
            DATEDIFF(ss,start_execution_date,stop_execution_date) ELSE 0 END) 'ActualRunTime',
        max(CASE
            WHEN stop_execution_date IS NULL THEN 'JobRunning'
            WHEN DATEDIFF(ss,start_execution_date,stop_execution_date)
                > (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-History'
            ELSE 'NormalRunning-History'
        END) 'JobRun',
        max(CASE
            WHEN stop_execution_date IS NULL THEN
                CASE WHEN DATEDIFF(ss,start_execution_date,GETDATE())
                    > (AvgRunTimeOnSucceed + AvgRunTimeOnSucceed * .05) THEN 'LongRunning-NOW'
                ELSE 'NormalRunning-NOW'
                END
            ELSE 'JobAlreadyDone'
        END) AS 'JobRunning'
    FROM msdb.dbo.sysjobactivity ja
    INNER JOIN msdb.dbo.sysjobs j ON ja.job_id = j.job_id
    INNER JOIN (
        -- per-job average + standard deviation of successful run durations, in seconds
        SELECT job_id,
            AVG((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100)
            +
            STDEV((run_duration/10000 * 3600) + ((run_duration%10000)/100*60) + (run_duration%10000)%100) AS 'AvgRuntimeOnSucceed'
        FROM msdb.dbo.sysjobhistory
        WHERE step_id = 0 AND run_status = 1
        GROUP BY job_id) art
    ON j.job_id = art.job_id
    WHERE
        (stop_execution_date IS NULL and start_execution_date is NOT NULL) OR
        (DATEDIFF(ss,start_execution_date,stop_execution_date) > 60 and DATEDIFF(MINUTE,start_execution_date,GETDATE()) > 60
         AND CAST(LEFT(start_execution_date,11) AS DATETIME) = CAST(LEFT(GETDATE(),11) AS DATETIME))
    --ORDER BY start_execution_date DESC
    GROUP BY j.job_id, name"
        $rdr = $sql.ExecuteReader()
        $dt.Load($rdr)
        $cn.Close()
        $dt | Out-DataTable
        Write-DataTable -ServerInstance 'test124' -Database "PowerSQL" -TableName "TLOG_JobLongRunning" -Data $dt
    }
    You can refer to the link below for the Out-DataTable and Write-DataTable functions:
    http://blogs.technet.com/b/heyscriptingguy/archive/2010/11/01/use-powershell-to-collect-server-data-and-write-to-sql.aspx
    Once we have the table details, I send one consolidated email automatically.
    --Prashanth
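
    Once the collector has populated the centralized table, pulling the jobs currently flagged takes a single query; a sketch, assuming TLOG_JobLongRunning has the column names produced by the script above:

        SELECT servername, JobName, StartTime, CurrentRunTime
        FROM   PowerSQL.dbo.TLOG_JobLongRunning
        WHERE  JobRunning = 'LongRunning-NOW';  -- running now and over threshold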

  • Progress indicator on long running jobs

    I have an FX application that is directly linked to my database. The program allows all DML operations as well as user defined actions (action commands and various other methods). I have the same application running in Swing, SWT and Canoo ULC, and all work just fine. In each of the other front end types, the application automatically displays a busy indicator when a long running job is executed. Now I need this in FX.
    My application is basically a Rich Client framework which allows the same business logic and forms to have different front ends depending on customer requirements. The application is built by customers in a 4GL style development tool. The application is actually built at run time and the data is provided by the user through various services. Because I am building an FX program for a framework, I don't know when the user may execute a long running job, for example when a button is pressed. I have full control over the retrieval and modification of data but not over user interaction. I am therefore looking for a busy indicator that appears automatically when the main thread is waiting.
    Any help would be great!

    Hi guys and thanks for your answers
    I may have stretched the mark with "long running jobs"; by these I mean a database query, a price calculation, an order process etc. These are standard jobs that are issued in a Rich Client application. Basically I have a screen which will execute a query, and I want to give the user feedback while the query is executing so that he doesn't think the application has hung. In Swing I did this by creating my own event queue with a delay timer:
    public class WaitCursorEventQueue extends EventQueue implements DelayTimerCallback {
        private final CursorManager cursorManager;
        private final DelayTimer    waitTimer;

        public WaitCursorEventQueue(int delay) {
            this.waitTimer = new DelayTimer(this, delay);
            this.cursorManager = new CursorManager(waitTimer);
        }

        public void close() {
            waitTimer.quit();
            pop();
        }

        protected void dispatchEvent(AWTEvent event) {
            cursorManager.push(event.getSource());
            waitTimer.startTimer();
            try {
                super.dispatchEvent(event);
            } finally {
                waitTimer.stopTimer();
                cursorManager.pop();
            }
        }

        public AWTEvent getNextEvent() throws InterruptedException {
            // started by pop(); this catches modal dialogs closing that do work afterwards
            waitTimer.stopTimer();
            return super.getNextEvent();
        }

        public void trigger() {
            cursorManager.setCursor();
        }
    }
    I then implemented this in my application like this:
    _eventQueue = new WaitCursorEventQueue(TIMEOUT);
    Toolkit.getDefaultToolkit().getSystemEventQueue().push(_eventQueue);
    Now each time the application waits longer than the delay, the cursor becomes a wait cursor. This gives the user visual feedback so that he knows the application is working and not just hung. By doing this, I do not need to wrap each user callout in a timer. Much easier like this!
    I would like to implement the same in FX.
