Killing a cube maintenance job

Hi,
We have a query on the handling of cube maintenance jobs:
If, because of a problem with the machine, we kill the AWM application process from the Windows Task Manager utility while it is running a cube build, the connection to the AW stays active and the Analytic Workspace remains attached in read-write mode. The impact is that if we want to restart the cube maintenance job by re-attaching the AW after having killed it, we currently have to restart the DB instance to get rid of the hanging connection.
Is there any way to "completely" kill this connection without having to restart the DB instance?
(The AWM version in use is 10.2.0.3.0A.)
Thanks,
Piyush

Something like the procedure below will work. You need to run it with sufficient privilege to query the V$SESSION, V$AW_OLAP, and DBA_AWS views (SYS, or a suitably privileged user) as well as to issue ALTER SYSTEM KILL SESSION.
CREATE OR REPLACE PROCEDURE sess_killer (awowner IN VARCHAR2,
                                         awname  IN VARCHAR2) AS
  mysid  NUMBER;
  serial NUMBER;
  -- Find every session that currently has the given AW attached
  CURSOR c1 IS
    SELECT s.sid, s.serial#
    FROM   v$session s,
           v$aw_olap a,
           dba_aws   d
    WHERE  d.owner      = awowner
    AND    d.aw_name    = awname
    AND    d.aw_number  = a.aw_number
    AND    a.session_id = s.sid;
  sql_stmt VARCHAR2(200);
  -- ORA-00031: session marked for kill
  session_marked EXCEPTION;
  PRAGMA EXCEPTION_INIT(session_marked, -31);
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 INTO mysid, serial;
    EXIT WHEN c1%NOTFOUND;
    -- DBMS_OUTPUT.PUT_LINE('Session: ' || mysid || ', ' || serial);
    sql_stmt := 'ALTER SYSTEM KILL SESSION ''' || mysid || ', ' || serial || '''';
    -- DBMS_OUTPUT.PUT_LINE(sql_stmt);
    BEGIN
      EXECUTE IMMEDIATE sql_stmt;
    EXCEPTION
      -- The session may only be marked for kill; that is good enough here
      WHEN session_marked THEN
        NULL;
    END;
  END LOOP;
  CLOSE c1;
END sess_killer;
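
To use it, connect as a suitably privileged user and pass the AW owner and name. The owner and AW name below are just placeholders, not values from this thread:

SET SERVEROUTPUT ON
EXEC sess_killer('SCOTT', 'MYAW')

Once the holding sessions are killed (or marked for kill and cleaned up), you should be able to re-attach the AW in read-write mode without bouncing the instance.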

Similar Messages

  • No new data in cube after an incoming maintenance job

    Hi,
    I thought I had read about this topic before, so I tried searching the forum but I didn't find any relevant threads.
    Anyways,
    I have a cube (partitioned by month) that I want to add new data to every day. I maintain the cube using AWM and accept all the default values for the maintenance job, except in the "cube data processing options", where I select "aggregate the cube for only the incoming data values".
    But after I do this, none of the new data is shown.
    I have mapped the cube to a view that only keeps the data from the last couple of days, to limit the load. When I do a select count against the view I get 150961 rows. When I check the XML_LOAD_LOG I see this: Processed 150961 Records. Rejected 0 Records.
    Some of the data in the view already exists in the cube and some is new, and it's the new data that is missing.
    I used OX to check the prt_topvar and the data isn't there either.
    The maintenance takes about 15-20 minutes, so everything seems normal.
    Anyone have any ideas about what might be wrong? Any suggestions would be appreciated.
    regards Ragnar
    edit: I just did a similar maintenance, but this time I limited the view to only new values, and it took just 15 seconds from when it started the load of measures until it was done with the solve.

    Once again, thank you for your reply.
    Regarding the 10.2.0.3A patch: unfortunately the only access I have to the server is through developer tools like AWM, OWB, SQL*Plus, etc., so I'm not able to check what's located in the DB home (I guess this is where the executable objects would be). Whenever support wants me to apply a new patch I have to ask my DBA, but he is on vacation at the moment, which makes it a bit more difficult. Hopefully he still has the log file and I can check it once he gets back. So for now I'm not doing anything more to check whether the patch is properly installed.
    Regarding maintenance of incoming values: I found a nice workaround for this one.
    To describe a little more how it behaved when it wouldn't work: the cube is partitioned by month in the time dimension, and for every new day I would add that day to the time dimension as well. If I created a report today, the highest date I would see would be Wed July 18. After a new maintenance of incoming values tomorrow, the highest value would be Thu July 19, and so on. The logic in the source views for this works perfectly for me, but it could easily be a source of error if you are not 100% sure how it will work for you.
    What I had to do to make the cube load the incoming values: on my initial load of the cube, I loaded every day up until 31-12-2007. After I did this, maintenance of incoming data works fine, and I suspect it will throughout the year. This might even be the way it is supposed to work; I'm not sure. It seems you have to include all the days in the partition when you do a full aggregation in order for the maintenance of incoming values to pick them up. What I noticed was that it would update days that were already partially loaded, but not days that were added by the incoming maintenance.
    I hope this made some sense.
    regards Ragnar

  • Workflow for Preventive Maintenance Job Approval

    I need a workflow for Preventive Maintenance Job Approval.
    Whenever a PM user raises a Preventive Maintenance Job, it should go for approval through up to two levels of authority.
    Can some one suggest some standard tasks or workflow template which will be useful for the purpose?
    Regards,
    Manas

    Hi Manas Santra,
    I don't think a standard workflow is available for preventive maintenance.
    Thanks and Regards
    Balaji E.

  • How to take a list of preventive maintenance jobs for every month - kindly help

    dear friends,
    Kindly help: how do I take the list of preventive maintenance jobs on a monthly basis?
    kindly provide me T-CODES.
    regards,
    g.v.shivakkumar

    Shivakumar,
    Use transaction "IW38" to select all PM orders for the month. The selection can be based on specific "order type", Status ", "Equipment"/ "Functional Location" or " "maintenance plan" with period interms of date.
    Hope this helps...Reward your points and close the thread.
    Regards,
    Prasobh

  • Problem with SQL Agent and Maintenance Job

    Hello all!
    We planned to create a maintenance job for a daily database backup, but we get an error when we try to create the maintenance job:
    An OLE DB error 0x80004005 (Client unable to establish connection) occurred while enumerating packages. (A SQL statement was issued and failed.)
    Has anybody solved this problem?
    We also found that the SQL Server Agent status was Stopped.
    I tried to start the Agent and got this error:
    TITLE: Microsoft SQL Server Management Studio
    Unable to start service SQLAGENT$server_name on server server_name. (mscorlib)
    ADDITIONAL INFORMATION:
    The SQLAGENT$server_name  service on server_name started and then stopped. (ObjectExplorer)
    If anybody has solved these problems, please let me know!
    Thanks in advance!
    Best regards,
    Ravil!

    Thanks for the links.
    I started the Agent following your first link, but after starting, the Agent service stops immediately.
    After restarting the service I got this error:
    The SQL Server Agent service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.
    Olaf Helper: we are using MS SQL Server 2008 R2 Standard Edition x64
    OS - Windows Server 2008 R2 x64
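
    When the Agent service starts and then stops immediately, the reason is usually recorded in the Agent's own error log (SQLAGENT.OUT). A quick way to read it from a query window is the xp_readerrorlog procedure; note this is undocumented and typically needs sysadmin rights, so treat it as a convenience sketch only:

    -- Read the current SQL Server Agent error log.
    -- First argument: 0 = current log.
    -- Second argument: 2 = SQL Agent log (1 = SQL Server error log).
    EXEC master.dbo.xp_readerrorlog 0, 2;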

  • Maintenance jobs taking longer and longer

    On 2 of our GroupWise servers the weekly maintenance jobs are taking much longer to run.
    What used to finish before I got in at 7:30 AM is, on one of the servers, still running at 4 PM right now. Last Monday they finished by noon.
    We are running GroupWise 8.03 on Netware.
    The two servers have about 300 Gig of data each.
    The issue of maintenance jobs running later started a few months ago, but today is the worst.
    Should I increase the number of GWWorker threads? I think I may want to push the startup times earlier too.
    Other ideas?
    thanks
    Phil J

    Guess I posted prematurely. Looking closer, I realized there was a select happening during this process against a text column without an index. The slowdown was just the increasing cost of looping through the entire dataset looking at strings that often shared a fairly sizable starting substring. Chalk another problem up to the importance of appropriate indexes in your DB!

  • Unable to kill or cancel a job in the source system

    Hi gurus,
    I got an error while loading a delta for the line-item ODS in AR,
    stating that the job has not ended in the source system.
    I found the job in the source system and it is active.
    I tried to kill that job from SM37, but it's not happening,
    and I tried to cancel it from SM50; even that is not happening.
    Kindly suggest how to cancel or kill that job.
    It's urgent, please.
    pramod

    Hi,
    Check if your system is running on more than one server; if so, kill your job on the appropriate server in transaction SM51. After killing the job, go back to SM37, select that particular job, choose 'Check job status' from the top menu bar, and then refresh.
    Hope this is helpful to you.
    Regards,
    Praveena.

  • Cube maintenance time increases every time it runs

    Hi,
    I have created a cube in AWM (10.1.0.4) with five dimensions. The cube is set not to preaggregate any levels, in order to keep maintenance time short. The first time I maintained the cube it took about 3 minutes. To get an average maintenance time, I maintained the cube a few more times. Each time I did this the cube took a bit longer to maintain, and the last time it took 20 minutes. Every time I checked the "aggregate the full cube" option, and no data was added to the source tables.
    I also checked the tablespace that the AW is stored in, and it also grows for each run. It is now using 1.6 GB.
    The database version is 10.1.0.4.0
    Anyone have any ideas to what I can do?
    Regards
    Ragnar
    edit: I did a few more runs; the last run took 40 minutes and the tablespace is now using 4.1 GB, so I think this is where the problem is. Instead of overwriting the old data, it seems to just add to it, making the tablespace bigger every time.

    Hi,
    it seems like I have resolved this problem now. I had made several cubes that were almost identical; the only difference was how they aggregated the data. One had full aggregation, one had skip-level aggregation, and the last had none. The reason I did this was to compare maintenance times and see how they affected the response time of the cube. I am not sure what caused it, but I never managed to aggregate the cubes correctly. The cube with full aggregation took just a minute or two to maintain, and when I chose to view the data it took another minute.
    So my impression was that it was aggregating all the data at runtime.
    When I tried to maintain any of the cubes after this, I got various errors. Usually the maintenance failed when the tablespaces couldn't grow anymore; the temp tablespace was at this point beyond 20 GB.
    I then thought that the names of the measures in the cube could have something to do with the errors, and renamed them so they were unique within the AW. The tablespaces grew large this time as well, but the maintenance stopped because of an out-of-memory error.
    Then I deleted all cubes but one and tried to maintain it. After about 35 minutes it was done, and when I chose to view the data it seemed to be precalculated and the response time was good. The tablespace containing the AW also seems normal in size, around 500 MB. I did several test runs during the night, and since yesterday the cube has successfully been maintained 15 times.
    So this brings me to my question:
    Can an AW contain only one cube? Or is this just user error on my part? It seems a bit weird that you could only have one cube using the same dimensions, so I'm not sure whether this is the right way of doing it, but it works.
    Anyone have any input or similar experiences?
    Regards
    Ragnar

  • Privilege require to kill a report server Job

    Hi all.
    I'm using Forms / Reports 11g Rel2 on a Windows box.
    What privileges are required to kill a report submitted to a (non-secure) report server?
    Precisely: if you're on the showjobs page, can anyone who accesses this page kill a job?
    Also, is it possible to use CANCEL_REPORT_OBJECT (within Forms) for a running job? Again, which privileges are required?
    Thanks in advance ....!

    Sure, No problem.
    Actually, you have two options; I forgot to mention the other alternative.
    1.) Oracle HTTP Server (OHS) - this will block any attempt to reach showjobs based on IP address/domain name/etc.
    That means, however, that you'll have to firewall off port 9002. But most of my customers do this anyway, as reports are commonly accessed via OHS.
    In $ORACLE_INSTANCE/config/OHS/ohs1/moduleconf/reports_ohs.conf, add the access-control directives shown below (the Order/Deny/Allow lines) to your conf file. You can allow individual IP addresses or ranges of IP addresses/hostnames/fully qualified domains; consult Google or the Oracle HTTP Server documentation for deny/allow rule syntax.
    <Location /reports/rwservlet/showjobs>
      SetHandler weblogic-handler
      WebLogicHost hostname
      WebLogicPort 9002
      Order deny,allow
      Deny from all
      Allow from 10.1.1.10
    </Location>
    2.) WLS_REPORTS solution. This lets you password-protect the showjobs screen.
    - Locate and back up your reports server's rwserver.conf - I'd make sure it's the right one, as a lot of people get confused because by default Reports comes with two reports servers.
    - Open rwserver.conf.
    - Add the following parameter between the "<queue maxQueueSize ..." tag and the "<pluginParam name="mailServer" value="%MAILSERVER_NAME%"/>" tag:
    <identifier encrypted="no">username/password</identifier>
    - Replace username/password with the actual username/password combination you want to use. Save your changes.
    - Reboot the reports server process - if you use a reports server that runs in OPMN, make sure you reboot your OPMN Reports server process as well.
    - When you try to access showjobs, it should now return an access-denied error. To access the page, add "?authid=username/password" to the end of your URL.
    For more info on this: http://docs.oracle.com/cd/E14571_01/bi.1111/b32121/pbr_strt001.htm#sthref63
    Both options aren't exactly elegant, but they work :)
    Thanks,
    Gavin
    http://pitss.com/us

  • Cube Maintenance

    Hi Experts,
    We have a Sales scenario. R/3 -> DSO -> Cube.
    For a particular date (12.11.2009) and a particular material there are multiple records in the Cube.
    For example: DSO = Invoice1 / Material1 / Qty = 50.
    But in the Cube the same record got loaded twice:
    Cube: Invoice1 / Material1 / Qty = 50.
          Invoice1 / Material1 / Qty = 50.
    How can I delete this particular record from the cube?
    Can I delete the request from the Cube for 12.11.2009 and then activate the request from the DSO to the Cube again?
    What is the feasible solution?
    Thanks
    Kumar.

    Hi,
    You can find the request that caused the duplicate records in the cube and compare it against the other records as well; if the entire content is doubled, then you can delete that request.
    Otherwise, you can do a selective deletion for this particular combination in the InfoCube and load the same selection from the ODS to the cube: as a repair full request in the case of delta, otherwise as a normal full load for the selective combination.
    Thanks
    Sat

  • Inventory cube maintenance

    Hi,
    We have been using the inventory cube 0IC_C03 for the past 3 years and we have about 100 million records. We have a few reports running off the cube. Because of the huge amount of data, query execution now takes a very long time, approximately 20 minutes.
    Are there any performance tuning measures I can take to speed up the query execution? Please advise.
    Thanks.

    Hi Morpheus;
    You can do the compression manually (Manage InfoCube -> Collapse, and select "With zero elimination") or in the process chain.
    However, performance problems with reports make me raise several questions:
    - Did you check the database performance of the tables? (RSRV -> All Elementary Tests -> Database -> Database Information about InfoProvider Tables -> your cube). If one dimension is more than 30% (relative to the fact table), that dimension is giving you problems. Try to use the maximum number of dimensions possible and define the heaviest ones as line-item dimensions (for material, for example).
    - Are you using "heavy" characteristics in your cube, like material document or sales/purchasing document numbers? If yes, can they be removed? The number of records will decrease a lot. You can also track them using a DSO.
    - Also, check the number of records transferred and added to your cube (usually, when transferred is lower than added, you need to find out why this happens and figure out a way to prevent it).
    - Check also the SP level used; a higher SP allows better performance in report execution.
    - Finally, check whether it isn't possible to add more filters to the query, and make sure "fixed" filters are in the global filter area and not in the "local" area.
    Regards;
    Ricardo

  • Missing AW Maintenance Details When Submitting to Oracle Job Queue

    Is there a way to know the number of added/deleted members or the processed/ rejected records of a dimension/cube maintenance that was submitted to the Oracle Job Queue?
    The following is the log of my recent maintenance task that was submitted to the Oracle Job Queue:
    18:59:03 Attached AW OLAP_TEST.AW_TEST in RW Mode.     
    18:58:27 Completed Build(Refresh) of OLAP_TEST.AW_TEST Analytic Workspace.     
    18:58:27 Finished Parallel Processing.     
    18:58:21 Running Jobs: AWXML$_2534_1. Waiting for Tasks to Finish...     
    18:58:21 Started 1 Finished 0 out of 1 Tasks.     
    18:58:21 Running Jobs: AWXML$_2534_1.
    Usually there would be a line in the XML_LOAD_LOG table where the added/deleted members or the processed/rejected records can be found. In this case, there is none.

    Usually the entries you want are before these lines (time-wise). The parallel processing only handles aggregating fact partitions in parallel, not loading the base level data.

  • Job cancelled While loading data from DSO to CUBE using DTP

    Hi All,
    While I am loading data from the DSO to the CUBE, the load job is getting cancelled.
    In the job overview I got the following messages:
        SYST: Date 00/01/0000 not expected.
       Job cancelled after system exception ERROR_MESSAGE
    What can be the reason for this error? I have successfully loaded data into 2 layers of DSO before loading the data into the CUBE.

    Hi,
    Are you loading a flat file to the DSO?
    If so, check the date field in the PSA data and replace the bad value with the correct date.
    I think you are using a write-optimized DSO, which is similar to the PSA in that it takes all the data from the PSA into the DSO.
    So clear the data in the PSA, then load to the DSO, and then to the CUBE.
    If you don't need a date field in the CUBE, remove its mapping in the cube's transformation, activate, reactivate the DTP, and trigger it; it will work.

  • Maintenance jobs with maintenance plans - troubleshooting

    Some of our old servers have maintenance jobs created with the built-in maintenance plans. We find it difficult to troubleshoot these kinds of jobs when they fail. Can anyone give a detailed analysis of how to troubleshoot jobs created from a maintenance plan: where to look, any system table/view that could help, etc.? The job history gives very minimal details and often results in cropped-off messages.
    I was told that if we alter the maintenance plans associated with the jobs and save them, it will break the jobs; is that so?

    For SQL Server 2000:
    You can check the failure logs either from the jobs or from the maintenance plan history.
    In Enterprise Manager, expand the server group.
    Expand the Management folder and select Database Maintenance Plans.
    Right-click the maintenance plan that failed and select Maintenance Plan History.
    Look for the failure and double-click the failed row to see more details on the failure.
    Similarly, from SQL Server 2005 onwards you can check the maintenance plan history from the maintenance plan itself:
    In SSMS, connect to the SQL instance.
    Expand the Management folder and select Maintenance Plans.
    Select the maintenance plan, open its maintenance plan history, and check for the failure.
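
    One tip for the cropped-off messages: the history rows live in msdb, so you can query the message text directly instead of relying on the GUI. A minimal T-SQL sketch (the TOP (200) cap is an arbitrary choice; the message column often holds more text than the GUI shows, though the Agent itself still caps each row). On SQL Server 2005 and later, msdb.dbo.sysmaintplan_log and msdb.dbo.sysmaintplan_logdetail additionally hold the maintenance plan's own detailed history:

    -- Failure messages for job steps, newest first
    SELECT TOP (200)
           j.name    AS job_name,
           h.step_id,
           h.step_name,
           h.run_date,
           h.run_time,
           h.message           -- fuller text than the cropped GUI history
    FROM   msdb.dbo.sysjobs j
           JOIN msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
    WHERE  h.run_status = 0    -- 0 = failed
    ORDER  BY h.instance_id DESC;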

  • Can I set up a job that kills multiple jobs?

    So, I've got some jobs that access a remote DB, and the link is not terribly reliable. When it hiccups, if a job is running, it hangs forever. Using this forum, I was able to set up a job that is triggered when a job goes over its max duration; it simply kills and restarts the job, and it works great for the one job that typically runs long enough to get hit on a daily basis. This morning, I noticed one of the other jobs hung. I realize I can easily set up another kill job that gets fired when this other job hangs, but what I'd really like is one master kill job that can kill any of several jobs, so I don't have to have a kill job for each job that might get hung.
    Is there a way to reference, in the PL/SQL block, any of the "tab" fields used in the condition block? I'd like to check tab.user_data.object_name in the PL/SQL block and kill/restart the appropriate job. So, my condition would be a generic event_type of JOB_OVER_MAX_DUR and would fire for any job meeting that condition. Possible? Feasible? Dangerous?

    Thanks for the reply, Tom.
    I am using version 10.2.0.4.
    Point of clarification: I don't want to kill multiple jobs on one execution. I just want to write one job that can handle killing any one of the several jobs that I'm having issues with. So, for example, I have jobs A, B, and C. I want one snipe job that is generically called when anything raises a job_over_max_duration event, and it would figure out which job (A,B, or C) that is stuck and kill it.
    From your reply, it sounds like I could do this in 11g, but probably not in 10g?
    Thanks again!
    ---dale
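
    For anyone landing here later, the 11g approach looks roughly like the sketch below. This is an untested sketch, not code from this thread: the names SNIPE_PROG, SNIPE_JOB, and snipe_handler are made up, it assumes the monitored jobs have max_run_duration set and raise_events including job_over_max_dur, and STOP_JOB with FORCE needs the MANAGE SCHEDULER privilege. The key pieces are the EVENT_MESSAGE metadata argument, which hands the raising event to the job, and the SYS.SCHEDULER$_EVENT_INFO type, whose OBJECT_NAME field identifies the stuck job:

    -- Generic handler: the scheduler passes in the event that fired, and
    -- evt.object_name tells us which job went over its max duration.
    CREATE OR REPLACE PROCEDURE snipe_handler (evt SYS.SCHEDULER$_EVENT_INFO) AS
      jobname VARCHAR2(100) := evt.object_owner || '.' || evt.object_name;
    BEGIN
      DBMS_SCHEDULER.STOP_JOB(job_name => jobname, force => TRUE);
      DBMS_SCHEDULER.RUN_JOB(job_name => jobname, use_current_session => FALSE);
    END snipe_handler;
    /

    BEGIN
      -- A program wrapping the handler, with the raising event wired in
      -- as a metadata argument rather than a normal argument.
      DBMS_SCHEDULER.CREATE_PROGRAM(
        program_name        => 'SNIPE_PROG',
        program_type        => 'STORED_PROCEDURE',
        program_action      => 'SNIPE_HANDLER',
        number_of_arguments => 1,
        enabled             => FALSE);
      DBMS_SCHEDULER.DEFINE_METADATA_ARGUMENT(
        program_name       => 'SNIPE_PROG',
        metadata_attribute => 'EVENT_MESSAGE',
        argument_position  => 1);
      DBMS_SCHEDULER.ENABLE('SNIPE_PROG');
      -- One event-based job that fires for ANY job's over-max-duration event.
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'SNIPE_JOB',
        program_name    => 'SNIPE_PROG',
        event_condition => 'tab.user_data.event_type = ''JOB_OVER_MAX_DUR''',
        queue_spec      => 'SYS.SCHEDULER$_EVENT_QUEUE',
        enabled         => TRUE);
    END;
    /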
