Running SETUP jobs in parallel in Production R/3

Hi All,
I have identified the number ranges (for Billing) for which I have to run the setup.
For example, I have 5 parameters/variants defined in OLI9BW (one variant per range), and say all are new runs.
How can I run these variants in parallel, in order to reduce the downtime of the Production system?
Do I need custom ABAP code to trigger these variants all at once, which in turn fills the setup tables?
Or is there a standard ABAP program where I can specify these variants and then kick off the run?
Note: I have already made sure there are enough background processes to run these 5 jobs in parallel.
Any help is much appreciated.
Thanks,
Jb

Abhijit,
I have an alternative approach to running setups for the history data, since I am afraid that running setups for 80 jobs may take a long time in Production, and I also want to avoid any complications.
Please let me know if this works
For future data:
1. Clear the qRFC and RSA7 by running the delta twice for the Billing DSO, so that the delta queue for 2LIS_13_VDITM is empty before transporting the new structure.
2. Send the new structure of 2LIS_13_VDITM to R/3 Production while R/3 Production is down.
3. Check the delta queue for this DataSource and see whether it is active with the new structure.
4. Turn the job control back on so that users can post new documents, and then restart the BW process chain.
This way, we don't run setups for the history data, but going forward the new-structure data comes into BW.
In order to capture history:
1. I thought of building a generic DataSource based on a view on VBRK and VBRP, with a selection on Bill_date from 03/2008 to date.
2. I suppose the view can handle 80k billing documents.
3. Then dump the data into a new DSO (the keys in the DSO are Bill_doc and Bill_item; the data fields are just, say, 'xx' and 'yy', the new fields that have been added to the extract structure).
4. Then load it as a full load to the Billing ODS (Bill_doc and Bill_item are the keys in the Billing DSO).
5. This way, all the data from the generic DataSource will overwrite the Billing DSO with the new fields 'xx' and 'yy'.
By the way, can we have a repair full request from DSO to DSO, so that the deltas won't get disturbed in the Billing DSO?
Please let me know if this works
Thanks,
Jb

Similar Messages

  • Running multiple jobs (or parallelism) in non-project mode

    So, I have just converted my GUI-based project-mode project to Tcl-based non-project mode.
    I have 8 OOC IP modules that I synthesise before the main design using synth_ip. This occurs sequentially, rather than in parallel as it could when I had a run for each module in the GUI; there I would just set the number of jobs to 8 upon launching the run, which was far quicker.
    Without creating runs for each IP, can I implement the same kind of parallelism with my non-project Tcl script?

    No. There are sort of two issues.
    The first is that in non-project batch mode there is not (supposed to be) a mechanism for managing files and filesets. Each synthesis run that you want to launch has its own set of files, and there would normally be no way (in non-project mode) of managing this within the Tcl environment. However, in the case of IP, this sort of isn't true - IP are almost by definition little projects, so this problem doesn't really apply here.
    The second is that in non-project mode, there is a single flow of execution - there is no concept of "background jobs", which is what is used in project mode; there is a single thread of Tcl execution that runs linearly. The processes invoked by this thread may use multiple processors (the place_design and route_design processes do), but only one process is running at a time. Furthermore, in non-project batch, there is no equivalent to fork/join (which is essentially what launch_runs/wait_on_run is).
    So, you have two choices. One is to compile the IP outside of your main Vivado run: use your OS to launch 8 separate Vivado processes, each of which has its own script to compile one of your IPs.
    The other is to compile the IP once, and keep the generated products around from run to run; your IP does not need to be synthesized each time - each synthesis run should end up with exactly the same netlist. You can even revision control the IP directory (with all its targets). This way, during "normal" runs, you skip the IP synthesis entirely and go straight to synthesizing your design.
    Avrum

  • How to run 3 jobs (a, b, c) in parallel in a Unix shell script, start d after they complete, and handle errors as well

    How to run 3 jobs (a, b, c) in parallel in a Unix shell script, start d after they complete, and handle errors as well.

    032ee1bf-8007-4d76-930e-f77ec0dc7e54 wrote:
    How to run 3 jobs (a, b, c) in parallel in a Unix shell script, start d after they complete, and handle errors as well.
    Please don't overwhelm us with so many details!  
    Just off the top of my head ... as a general approach ... something like
    nohup proca &
    nohup procb &
    nohup procc &
    # loop while any of the three background processes is still running
    # (a plain "wait" after the three lines above would also block until all finish)
    while ps -ef | grep -v grep | grep -qE 'proc[abc]'
    do
        sleep 2
    done
    procd
    But, we'd really need to know what it is you are really trying to accomplish, instead of your pre-conceived solution.

  • Running the jobs sequentially instead of parallel

    Hi All,
    My script runs multiple jobs by iterating in a loop, one by one. They are getting executed in parallel, whereas I want them to run sequentially, one only after another has finished execution. To ensure this I have added this logic to my code:
    [Code for Job name reading, parameters setting goes here]
    jcsSession.persist();
    while ((infoJob.getStatus().toString().matches("Scheduled")) || (infoJob.getStatus().toString().matches("Running"))) {
      jcsOut.println("infojob still running: " + infoJob.getStatus().toString());
      // note: infoJob's status is presumably never refreshed inside this
      // loop, which would explain why it spins forever
    }
    This should run after each job's persist statement.
    Ideally the loop should end as soon as the job reaches the 'Error' or 'Completed' state, i.e. when the job ends; but when I run the script, this while loop runs infinitely.
    Because of this, the first job ends but the script does not move on to the next job.
    Please tell me what I am doing wrong here, or whether there is any other way to make the jobs run sequentially through Redwood.
    Thanks,
    Archana

    Hi Archana,
    How about jcsSession.waitForJob(infoJob);
    Regards,
    HP

  • Running Jobs in Parallel

    Hi There,
    I have two procedures A & B.
    I want to run these two in parallel.
    How can I achieve this ?
    I want to run from Oracle.
    Will DBMS_JOB work here?
    Though two jobs may be scheduled at the same time, will they run one after the other based on the job_id?
    Thanks.

    You can use DBMS_JOB to do that. Submit both procedures into the job queue one after the other, and when you COMMIT they will be run at the next available opportunity.
    "it will run one after the other based on the job_id" - that depends upon the job_queue_processes parameter configuration. What is this setting currently?
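    A minimal sketch of that approach, assuming the two procedures are called proc_a and proc_b (hypothetical names standing in for A and B):
    DECLARE
      l_job_a BINARY_INTEGER;
      l_job_b BINARY_INTEGER;
    BEGIN
      -- submit both jobs; nothing runs until COMMIT
      DBMS_JOB.SUBMIT(l_job_a, 'proc_a;', SYSDATE);
      DBMS_JOB.SUBMIT(l_job_b, 'proc_b;', SYSDATE);
      COMMIT;  -- both become eligible at once; with job_queue_processes >= 2
               -- the job coordinator can run them in parallel
    END;
    /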

  • Remove existing Microsoft Office server products and re-run setup

    Hi Folks,
    It would be great if someone could help me.
    I have installed SharePoint 2007 on Windows 2008 R2 with SQL Server 2008, and I was able to access SP2007.
    For a requirement I uninstalled SQL 2008 and SP2007, but I didn't remove SP2007 from the server.
    Note:
    Now I have installed a new licensed SQL 2008, but while installing SP2007 Project Server I am getting an issue like "Remove existing Microsoft Office server products and re-run setup".
    I have removed the 12 & 14 hives.
    My query is: does anything else need to be removed from the server, or what should I do about this issue?
    Thanks,
    Inguru

    Are you trying to install Office Web Apps 2013 on the same server as SharePoint Server 2013? Sorry, I could not figure that part out from your post. If you are installing OWA on an SP 2013 server, that is not possible in SP 2013. You will need to install OWA on a different server where SharePoint is not installed. Basically, a dedicated OWA server (or servers).
    Check following article out.
    http://technet.microsoft.com/en-us/library/ff431687.aspx
    Amit

  • Setup JOB to run sh script with argument

    Hi all,
    Can anyone share your views and experience on how to set up a job to execute a shell script with an argument?
    For example : I need to execute /export/home/joel/test.sh 20060921
    20060921 is the argument.
    If I define a program to execute the script, can I use DEFINE_PROGRAM_ARGUMENT to set the argument?
    I am not sure, because my understanding is that it will only work with the STORED_PROCEDURE program type.
    How does Oracle Scheduler handle such a case?
    I really appreciate your response.
    Thanks

    Hi,
    The thread above contains information specific to shell scripts. For stored procedures you just set job_type to 'stored_procedure', set number_of_arguments to the number of arguments that you want to pass into the stored procedure, and then call set_job_argument_value for each argument before finally enabling the job. So, for example:
    begin
      dbms_scheduler.create_job(
        job_name            => 'j1',
        job_type            => 'stored_procedure',
        job_action          => 'dbms_output.put_line',
        number_of_arguments => 1);
      dbms_scheduler.set_job_argument_value('j1', 1, 'this is my argument');
      dbms_scheduler.enable('j1');
    end;
    /
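    For the original shell-script case, a minimal sketch along the same lines (an assumption on my part: external jobs must be enabled on your system, and the job owner needs the CREATE EXTERNAL JOB privilege):
    begin
      dbms_scheduler.create_job(
        job_name            => 'run_test_sh',   -- hypothetical job name
        job_type            => 'executable',
        job_action          => '/export/home/joel/test.sh',
        number_of_arguments => 1);
      -- the date argument from your example, passed as $1 to the script
      dbms_scheduler.set_job_argument_value('run_test_sh', 1, '20060921');
      dbms_scheduler.enable('run_test_sh');
    end;
    /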
    Hope this helps,
    Ravi.

  • Run 5 commands in parallel using scriptblock

    Hi,
    I have an array of 10000 commands that I want to process through an application, but only 5 commands in parallel at the same time. So I will take 5 commands from the array $arrCommands, run those in parallel, then take another 5 commands, process them, and so on. I might have to use "scriptblock" functionality, but with the current setup the chances are it will try to run thousands of commands in parallel, which will actually kill the server. I won't use "ForEach -Parallel" because my server is running PowerShell V2.
    Can anyone suggest how I can use scriptblock functionality to receive and process 5 commands in parallel?
    ## Function to process commands
    Function ProcessCommands
    {
        param
        (
            [string]$ParamCommand
        )
        [hashtable]$Return = @{}
        $objProcess = New-Object System.Diagnostics.Process
        $objProcess.StartInfo = New-Object System.Diagnostics.ProcessStartInfo
        $objProcess.StartInfo.FileName = "\\OFCAPPSRVR\apps\calcrun.exe"
        $objProcess.StartInfo.Arguments = $ParamCommand
        $objProcess.StartInfo.UseShellExecute = $false   # must be $false to redirect output
        $objProcess.StartInfo.WindowStyle = 1
        $objProcess.StartInfo.RedirectStandardOutput = $true
        $null = $objProcess.Start()
        # read the output before WaitForExit to avoid a full-buffer deadlock
        $StandardOutput = $objProcess.StandardOutput.ReadToEnd()
        $objProcess.WaitForExit()
        $ExitCode = $objProcess.ExitCode
        $objProcess.Dispose()
        $Return.ExitCode = $ExitCode
        $Return.StandardOutput = $StandardOutput
        Return $Return
    }
    ## Main
    $arrResults = @()
    $arrCommands = @(
        "STD -iMXB9010 -o\\OFCAPPSRVR\outputs\MXB9010.pdf",
        "STD -iMXB6570 -o\\OFCAPPSRVR\outputs\MXB6570.pdf",
        "STD -iMXB8010 -o\\OFCAPPSRVR\outputs\MXB8010.pdf",
        "STD -iMXB5090 -o\\OFCAPPSRVR\outputs\MXB5090.pdf",
        "STD -iMXB2440 -o\\OFCAPPSRVR\outputs\MXB2440.pdf",
        "STD -iMXB8440 -o\\OFCAPPSRVR\outputs\MXB8440.pdf"
    )
    foreach ($Command in $arrCommands)
    {
        $Return = ProcessCommands -ParamCommand $Command
        $arrResults += New-Object PSObject -Property @{COMMAND=$Command; EXITCODE=$Return.ExitCode; OUTPUT=$Return.StandardOutput}
    }

    I think the method described in this blog is your best bet:
    http://blogs.msdn.com/b/powershell/archive/2011/04/04/scaling-and-queuing-powershell-background-jobs.aspx
    I hope this post has helped!

  • Error ORA-01017 occurs when dbms_scheduler runs a job

    Hi All,
    I have a problem when I use dbms_scheduler to run a job: I get error code 1017 when the job is run by the scheduler. Please find my steps below:
    Oracle version is : Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    1. Created a job successfully by using the code below:
    begin
      dbms_scheduler.create_job(
        job_name        => 'monthly_refresh_elec_splits',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN TRADINGANALYSIS.PKG_IM_REPORTING_ERM.REFRESH_ELEC_SPLITS_TEST; commit; END;',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'freq=monthly;bymonthday=25;byhour=10;byminute=35;bysecond=0;',
        end_date        => NULL,
        enabled         => TRUE,
        comments        => 'monthly_refresh_elec_splits.',
        auto_drop       => FALSE);
    end;
    /
    2. Got the job run details from table user_scheduler_job_run_details after the job finished:
    select * from user_scheduler_job_run_details where job_name = 'MONTHLY_REFRESH_ELEC_SPLITS' order by log_id desc;
    LOG_ID:            2054804
    LOG_DATE:          25/06/2012 10:35:01.086000 AM +10:00
    OWNER:             TRADINGANALYSIS
    JOB_NAME:          MONTHLY_REFRESH_ELEC_SPLITS
    STATUS:            FAILED
    ERROR#:            1017
    REQ_START_DATE:    25/06/2012 10:35:00.300000 AM +10:00
    ACTUAL_START_DATE: 25/06/2012 10:35:00.400000 AM +10:00
    RUN_DURATION:      +00 00:00:01.000000
    INSTANCE_ID:       1
    SESSION_ID:        1025,37017
    SLAVE_PID:         129396
    CPU_USED:          +00 00:00:00.030000
    ADDITIONAL_INFO:   ORA-01017: invalid username/password; logon denied
                       ORA-02063: preceding line from NETS
                       ORA-06512: at "TRADINGANALYSIS.PKG_IM_REPORTING_ERM", line 574
                       ORA-06512: at line 1
    3. If I run the job directly, it finishes successfully:
    begin
    dbms_scheduler.run_job('monthly_refresh_elec_splits',TRUE);
    end;
    LOG_ID:            2054835
    LOG_DATE:          25/06/2012 11:05:38.515000 AM +10:00
    OWNER:             TRADINGANALYSIS
    JOB_NAME:          MONTHLY_REFRESH_ELEC_SPLITS
    STATUS:            SUCCEEDED
    ERROR#:            0
    REQ_START_DATE:    25/06/2012 11:04:35.787000 AM +10:00
    ACTUAL_START_DATE: 25/06/2012 11:04:35.787000 AM +10:00
    RUN_DURATION:      +00 00:01:03.000000
    INSTANCE_ID:       1
    SESSION_ID:        1047,700
    CPU_USED:          +00 00:00:00.030000
    Additional Info:
    PL/SQL Code in procedure
    PROCEDURE Refresh_Elec_Splits_Test IS
    BEGIN
      -- refresh im_fact_nets_genvol from v_im_facts_nets_genvol in NETS
      DELETE FROM im_fact_nets_genvol;
      -- the local NETS_GENVOL table has an additional column providing
      -- volume splits by generator and month
      -- INSERT INTO im_fact_nets_genvol VALUES ('test',sysdate,'test',1,2,3,4,5,6,7);
      INSERT INTO im_fact_nets_genvol
      SELECT ngv.*,
             ratio_to_report(net_mwh) OVER (PARTITION BY settlementmonth, state)
               gen_percent
      FROM   v_im_facts_nets_genvol@nets ngv;  -- remote object name was mangled in the original post; reconstructed from the comment above
      COMMIT;
    END;
    Can anyone advise where I should check and how I can solve the problem?
    Thanks in advance

    I apologize if you already solved this.. but see Metalink ID 790221.1
    +*<Moderator Edit - deleted contents of MOS Doc - pl do NOT post such content - it is a violation of your Support agreement>*+

  • "Unable to set shared config DC." when running setup /RecoverServer

    Hi guys,
    I'm hoping for a bit of assistance. I have stepped into an environment where there is one production Exchange 2010 server and one Exchange 2013 server. The Exchange 2013 server is in an unrecoverable state; however, it contains the domain administrator's mailbox and therefore I cannot remove it manually. As it was a VM, I took it offline, created an identical server (name, OS version (Windows 2008 R2 with latest updates), IP address, etc.), installed all the prerequisite components, and then ran setup /m:RecoverServer /IAcceptExchangeServerLicenseTerms. Everything runs through successfully until it gets to the Mailbox role: Transport service, where it fails with "Unable to set shared config DC.". Some searching on Google suggests this happens when IPv6 is disabled on the DC; we have two DCs in our environment and both have IPv6 enabled, as does the Exchange server. If I try to re-run the installation for the role alone, i.e. Setup /mode:install /IAcceptExchangeServerLicenseTerms /role:HubTransport, it fails with "The machine is not configured for installing "BridgeheadRole" Datacenter Role."
    Any help would be greatly appreciated. 

    Thanks Amit. I understand there's no HubTransport role in 2013, but this is the error that it's throwing:
    PS D:\> .\setup /m:RecoverServer /IAcceptExchangeServerLicenseTerms
    Welcome to Microsoft Exchange Server 2013 Service Pack 1 Unattended Setup
    Copying Files...
    File copy complete. Setup will now collect additional information needed for
    installation.
    Languages
    Mailbox role: Transport service
    Mailbox role: Client Access service
    Mailbox role: Unified Messaging service
    Mailbox role: Mailbox service
    Management tools
    Client Access role: Client Access Front End service
    Client Access role: Front End Transport service
    Performing Microsoft Exchange Server Prerequisite Check
        Configuring Prerequisites                                 COMPLETED
        Prerequisite Analysis                                     FAILED
         A Setup failure previously occurred while installing the HubTransportRole role. Either run Setup again for just this role, or remove the role using Control Panel.
         For more information, visit: http://technet.microsoft.com/library(EXCHG.150)/ms.exch.setupreadiness.InstallWatermark.aspx
    The Exchange Server setup operation didn't complete. More details can be found in ExchangeSetup.log located in the <SystemDrive>:\ExchangeSetupLogs folder.
    Any other thoughts?

  • Use_current_session = FALSE does not run my job correctly

    We have a custom scheduler that invokes jobs based on schedules/conditions. Until now, the jobs were all kicked off in the same session. Since the record set to be processed is increasing, we want the jobs submitted in parallel.
    So the main job is split into several discrete jobs which are run in different sessions (dbms_scheduler.run_job with use_current_session = FALSE). The programs and jobs get created successfully.
    The program has around 12 arguments defined.
    The jobs run; however, they error out with "ORA-06502: PL/SQL: numeric or value error ORA-06502: PL/SQL: numeric or value error: character to number conversion error" *(DBA_SCHEDULER_JOB_RUN_DETAILS)*.
    If I run the jobs with this parameter set to TRUE, the jobs run successfully. Any pointers are greatly appreciated.
    Here are additional details..
    DB: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
    dba_scheduler_global_attribute
    MAX_JOB_SLAVE_PROCESSES
    LOG_HISTORY 30
    DEFAULT_TIMEZONE US/Pacific
    LAST_OBSERVED_EVENT
    EVENT_EXPIRY_TIME
    CURRENT_OPEN_WINDOW WEEKEND_WINDOW
    v$parameter where name like '%process%'
    processes 150
    gcs_server_processes 0
    db_writer_processes 1
    log_archive_max_processes 2
    job_queue_processes 20
    aq_tm_processes 0
    Thanks
    Kiran.

    Hi,
    This error seems clear,
    character to number conversion error : at "XXA.XXX_ANP_ENGINE_MASTER_PKG", line 24
    This is application code which the scheduler did run but the application code is throwing the error.
    You will have to debug the issue occurring at line 24 of package "XXA.XXX_ANP_ENGINE_MASTER_PKG". You may be relying on something in your session which is not available in a background session - so the job fails when run in the background.
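    For instance (an assumption on my part, since the package code is not shown): job arguments are passed to the scheduler as strings, and an implicit character-to-number conversion that works in your interactive session can fail in a background session with different NLS settings. Converting explicitly removes that session dependency:
    declare
      l_value number;
    begin
      -- explicit format mask and NLS parameter, instead of relying on the
      -- session's nls_numeric_characters setting
      l_value := to_number('1234.5', '99999D9', 'nls_numeric_characters=''.,''');
      dbms_output.put_line(l_value);
    end;
    /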
    Hope this helps,
    Ravi.

  • Error when running setup: Attach to native process

    Hi,
    I am trying to install the Oracle iPlanet Web Server (version 7.0.13) on Linux (CentOS 6.2, 64-bit).
    I first installed the compat-libstdc++-33 packages.
    After that I installed JRE, this version:
    java version "1.6.0_31"
    Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
    Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
    To install the iPlanet Web Server I first extracted the contents of the downloaded file (Oracle-iPlanet-Web-Server-7.0.13-linux-x64).
    Then I tried to run setup with the command ./setup.
    The error I get is: attach to native process failed.
    Any idea what is wrong?
    Regards Stefan
    Thanks for the help.

    Stefan,
    I am seeing the same error on Ubuntu 12.04 Precise Pangolin (which is admittedly unsupported). This happened to me once before on an older version of Ubuntu, and as I recall, it was a missing library dependency, but I am not able to locate the fix I came up with at that time.
    Interestingly, my existing ws7u12 installation runs fine, I just cannot upgrade to the latest update (15).
    I have this software under support, but only on Solaris (I develop on Ubuntu, but deploy to my production Solaris server). In any case, as Ubuntu is not officially supported, opening an SR for this problem would be a waste of time.
    I'll continue digging for my old solution and post here if I find it. If you've since discovered the solution, I'd love to hear it.
    Thanks,
    Bill

  • Get Running Timer Jobs by PowerShell

    Dear All,
    Kindly, do you know if it is possible to get the running timer jobs using PowerShell?
    We cannot write server-side code in production, as we cannot have downtime.
    Is there an API for it? Is there a limitation? Is there a workaround?
    We have tried to get the latest job and check whether its date is greater, which should identify the running one, but it did not work.
    Regards,
    Mai
    Mai Omar Desouki | Software Consultant | Infusion | MCP, MCTS, MCPD, MCITP, MCT Microsoft Certified Trainer & MCC Microsoft Community Contributor | Email: [email protected] | Blog: http://moresharepoint.wordpress.com

    Hi
    Use the command below if you want a specific timer job:
    $JobName = "mytimer"
    $WebApp = Get-SPWebApplication http://mywebappurl
    $job = Get-SPTimerJob | ?{$_.Name -match $JobName} | ?{$_.Parent -eq $WebApp}
    For all timer jobs related to a web application:
    $job = Get-SPTimerJob -WebApplication $WebApp
    Regards,
    Rajendra Singh
    If a post answers your question, please click Mark As Answer on that post and Vote as Helpful
    http://sharepointundefind.wordpress.com/

  • Can a long-running batch job causing deadlocks bring server performance down?

    Hi
    I have a customer with a long-running batch job (approx 6 hrs); recently we experienced a performance issue where the job now takes >12 hrs. The database server is crawling, and the alert.log shows some deadlocks.
    The batch job is in fact many parallel child batch jobs running at the same time, which would explain the deadlocks.
    Thus, I am just wondering whether it is possible that, because of deadlocks, the whole server crawls, so that even connecting to the database using Toad is slow, as is doing ls -lrt.
    Thanks
    Rgds
    Ung

    Kok Aik wrote:
    According to the documentation, a complex deadlock can make a job appear to hang and can affect throughput, but it doesn't mention how it would make the whole server slow down. My initial thought would be that the rolling back and reconstruction of CR copies would use up the CPU.
    I think your ideas on rolling back, CR construction etc. are good guesses. If you have deadlocks, then you have multiple processes working in the same place in the database at the same time, so there may be other "near-deadlocks" that cause all sorts of interference problems.
    Obviously you could have processes queueing for the same resource for some time without getting into a deadlock.
    You can have a long-running update hit a row which was changed by another user after the update started, which would cause the long-running update to roll back and start again (Tom Kyte refers to this as 'write consistency' if you want to search his website for a discussion of the topic).
    Once concurrent processes start sliding out of their correct sequences because of a few delays, it's possible for reports that used to run when nothing else was going on suddenly finding themselves running while updates are going on - and doing lots more reads (physical I/O) of the undo tablespace to take blocks a long way back into the past.
    And so on...
    Anyway, according to the customer, the problem seems to be related to the lgpr_size, as the problem disappeared after they reverted it to its original default value, 0.
    I couldn't figure out what lgpr_size is - can you explain?
    Thanks
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking" Carl Sagan

  • Can I execute the job in parallel?

    Please find my DBMS_SCHEDULER job below:
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'job1',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN pk_dt_auto.pr_fire_process(''764''); END;',
        start_date      => SYSDATE,
        -- run every 30 seconds (the calendar equivalent of the old
        -- DBMS_JOB-style "SYSDATE + 30/86400" interval)
        repeat_interval => 'FREQ=SECONDLY;INTERVAL=30');
    END;
    /
    Can I run the job twice in parallel?

    Hi,
    For dbms_scheduler (and dbms_job) once a job is running it will NOT be started again until after it has finished. The scheduler ensures that only one instance of a job is running at a given time.
    There is a way around this, however. In your job, instead of doing the manipulations, create a simple one-time job with a unique name (dbms_scheduler.generate_job_name) that does the manipulations. Since creating a job is fairly quick, the main job will finish quickly and be rescheduled for after the interval, while the one-time job continues doing the work in the background.
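    A minimal sketch of that pattern, assuming the real work lives in a procedure called do_manipulations (a hypothetical name):
    declare
      l_job_name varchar2(128);
    begin
      -- unique name, so several one-time jobs can coexist
      l_job_name := dbms_scheduler.generate_job_name('PARALLEL_WORK_');
      dbms_scheduler.create_job(
        job_name   => l_job_name,
        job_type   => 'PLSQL_BLOCK',
        job_action => 'BEGIN do_manipulations; END;',
        enabled    => TRUE);  -- starts immediately; one-time jobs auto-drop when done
    end;
    /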
    Hope this helps,
    Ravi.
