Extracting JOB_ID within the DS Job

We have the following requirements:
A web client calls a batch job.
We do not want the web client to continually poll the DI web service to check the status of the job. So, when the DI job finishes, we would like the job, as its last step, to call a web service to indicate that it finished (or failed, in a Catch block).
To support this, we would like to pass the Job ID as a parameter to this web service (theoretically, there could be multiple instances of the same job running in parallel, so the job name alone would not be enough).
Any ideas on how we can get the Job ID inside the job, or otherwise support the above requirements?
Thanks,
Sandeep

You can use the information stored in the ALVW_HISTORY view.
If you are running multiple instances of the same job, it is difficult to identify the row for the running instance from within the job itself; I don't think there is a built-in function that returns the run ID of the job.
Since the trace, monitor and error log files are unique per job instance, you could use get_trace_filename() in the WHERE clause to identify the row for the job, in combination with other columns such as END_TIME (still null while the instance is running), the machine name, the port number, etc.
The TRACE_LOG column stores the complete path of the file. On Windows the stored path can contain a mix of / and \ as path separators, whereas get_trace_filename() returns only / as the separator, so you may have to do a replace while comparing.
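For example, a minimal sketch of a script step in the job — the repository datastore name DS_REPO, the global variables, and the run-id column name INST_ID are assumptions here, so check them against your ALVW_HISTORY definition and repository database:

# normalize the separators returned by get_trace_filename() to forward slashes
$G_TRACE_FILE = replace_substr(get_trace_filename(), '\\', '/');
# look up the matching history row; {$G_TRACE_FILE} substitutes the quoted value,
# and CHR(92)/CHR(47) do the same \ -> / replace on the database side (Oracle syntax assumed)
$G_RUN_ID = sql('DS_REPO', 'SELECT MAX(INST_ID) FROM ALVW_HISTORY WHERE REPLACE(TRACE_LOG, CHR(92), CHR(47)) = {$G_TRACE_FILE}');
print('Current run id: ' || $G_RUN_ID);

The value of $G_RUN_ID could then be passed as an input parameter to the web service call in the final step (or in the Catch block).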

Similar Messages

  • Triggering a background job of class 'A' from within the ABAP program

    Dear All,
    We are implementing SAP ECC 6.0 on IBM System i, i5/OS V5R4,  SAP kernel 7.00, kernel patch level 173.
    Is there a way to ensure that when a background job is triggered from within an ABAP program using the CALL FUNCTION 'JOB_OPEN' statement, the background job is of class 'A'?
    I know that through transaction SM37 the job class of a background job can be changed manually, but the situation is that an outsourcing company made some changes for us in the standard SAP ABAP programs behind some standard transactions, and at the end those programs trigger background jobs, each running under the name of the user executing the transaction.
    Through SM37, I can't find a template background job that could be changed to have class 'A'.
    The following is an excerpt from the ABAP code containing the CALL FUNCTION 'JOB_OPEN' statement:
    FUNCTION z_cs_technical_completion.
    *"----------------------------------------------------------------------
    *"  Local Interface:
    *"  IMPORTING
    *"     VALUE(AUFNR) TYPE  VBRP-AUFNR
    *"----------------------------------------------------------------------
      DATA jobcount TYPE tbtcjob-jobcount.

    * open the background job (optional scheduling parameters left commented out)
      CALL FUNCTION 'JOB_OPEN'
        EXPORTING
          jobname                = 'CS_TECH_COMPLETE'
    *     SDLSTRTDT              = NO_DATE
    *     SDLSTRTTM              = NO_TIME
    *     JOBCLASS               =
        IMPORTING
          jobcount               = jobcount
    *   CHANGING
    *     RET                    =
        EXCEPTIONS
          cant_create_job        = 1
          invalid_job_data       = 2
          jobname_missing        = 3
          OTHERS                 = 4.

    * add the report as a job step
      SUBMIT zcs_technical_completion
              WITH p_aufnr EQ aufnr
                AND RETURN
              VIA JOB 'CS_TECH_COMPLETE'
              NUMBER jobcount.

    * release the job for immediate start
      CALL FUNCTION 'JOB_CLOSE'
        EXPORTING
          jobcount             = jobcount
          jobname              = 'CS_TECH_COMPLETE'
          strtimmed            = 'X'
        EXCEPTIONS
          cant_start_immediate = 1
          invalid_startdate    = 2
          jobname_missing      = 3
          job_close_failed     = 4
          job_nosteps          = 5
          job_notex            = 6
          lock_failed          = 7
          invalid_target       = 8
          OTHERS               = 9.
    ENDFUNCTION.
    Thank you in advance for your cooperation.
    Best regards.
    Reda Khalifa

    Dear Darren,
    Thank you very much for your cooperation and for your prompt reply.
    Could you please explain to me how to find out which template background job was used first, or in other words, how things were set up in the first place? When the ABAP program was first written and executed, there must have been at least one background job created through transaction SM36.
    Thank you in advance for your cooperation.
    Best regards.
    Reda Khalifa
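
    As a side note on the original question: the JOB_OPEN excerpt above already lists a JOBCLASS parameter (left commented out). A minimal, untested sketch of requesting class 'A' directly when opening the job — subject to whatever authorization checks your system applies to class 'A' jobs — could look like this:

      CALL FUNCTION 'JOB_OPEN'
        EXPORTING
          jobname          = 'CS_TECH_COMPLETE'
          jobclass         = 'A'              " request job class A explicitly
        IMPORTING
          jobcount         = jobcount
        EXCEPTIONS
          cant_create_job  = 1
          invalid_job_data = 2
          jobname_missing  = 3
          OTHERS           = 4.
      IF sy-subrc <> 0.
        " handle the error, e.g. write a log message
      ENDIF.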

  • After creating jobs within the project, what is the process to upload to prod

    I have a project with 6 jobs in it. What is the process to upload it, and what are the next steps?
    I also noticed that when another user tried to access my project, he was not able to see all the jobs; he could only see a few of them.
    This is the first task I am working on, so I would really appreciate your advice on the appropriate next steps.
    thank you very much for the helpful info.

    I have a project with 6 jobs in it. What is the process to upload it, and what are the next steps?
    Go to Tools > Export. Collect your objects there from the Local Object Library, then right-click and export. They can be exported as ATL / XML files or exported directly to another repository. Remember to change the datastore configurations, table owners, etc. to suit the other environment.
    If it is within the DEV system itself, you can configure a Central Repository through which you can share objects with other local repositories.
    I also noticed that when another user tried to access my project, he was not able to see all the jobs; he could only see a few of them.
    The other user might have accessed the same project in a different local repository, and the version available inside that local repository might not be what you have.
    Regards,
    Suneer

  • Adding hyperlink within the Job Posting description in Peoplesoft TAM

    Hi All -
    Is it possible to put a hyperlink in the Job Posting description in TAM? The entire Job Posting description is retrieved from the Job Posting index file, which gets created by running the delivered process (Verity Job Posting Index Build: HRS_JSCH_IDX). When I put a hyperlink in the Job Posting description, it comes out as regular text. If anyone has worked on this type of requirement, please let me know how to achieve it. To give an idea, below is what is required:
    Job Posting Description:
    General Duties:
    abcdefg abcdefg abcdefg abcdefg abcdefg abcdefg abcdefg abcdefg abcdefg abcdefg abcdefg
    abcdefg abcdefg abcdefg abcdefg abcdefg
    Qualifications:
    abcdefg abcdefg abcdefg abcdefg. See www.xyz.com to read more about this. abcdefg abcdefg abcdefg
    Other Qualifications:
    abcdefg abcdefg abcdefg.

    The job index is only used when the applicant does a search. When applicants are looking at the Job Openings before or after they have signed in, the page does not use the results of the job index build App Engine, so you can put the link on the page yourself.

  • Delivery Schedule within the Firm Zone of a SA not firming

    Hello,
    We have delivery schedules within the firm zone (defined in the Additional Data of the item detail) of a scheduling agreement (SA), but they are not firmed by the MRP run.
    Here is the material master setting and other relevant information:
    MRP Type = PD (firming type = blank)
    Planning Time Fence = 0 (in MRP1 view)
    The flag in OMIN to firm only the transmitted schedule lines is not set
    Maintained Firm/Trade-off zone (as mentioned above) and Binding on MRP = '1' in an item of a SA
    The delivery schedule should supposedly be firmed based on the firm zone, but instead it keeps getting pushed out after the MRP run.
    I would appreciate it very much if you could tell me what is causing the problem and how to remedy it.
    Kind Regards,
    Eddie

    Eddie,
    Your batch job is OK; however, until the delivery schedule with the new schedule line proposals has been transmitted, the schedule lines can be changed by MRP because they are not yet firm.
    In my experience the OMIN setting affects the MRP-type firming settings (P1, P2, ...), not the SA-related firming settings. Here the 'Binding on MRP' key determines what MRP can and cannot change.
    See here the extract out of the help function on the OMIN setting.
    Firm Only Schedule Lines Transmitted to Vendor by Purchasing
    If you set this indicator, the system only firms the schedule lines which have been transmitted to the vendor by the purchasing department.
    Use
    This function is used if you use a planning time fence in the planning run. If you use firming type 1 ("Automatic firming and order proposals are displaced"), the system automatically firms existing schedule lines when they move into the planning time fence.
    Use this indicator if you only want the system to firm schedule lines that have already been checked and passed on to the vendor per message transfer. If, for example, the quantity of a schedule line was changed but the changed quantity has not yet been passed on to the vendor, then the old quantity that has already been passed on to the vendor is firmed as soon as this schedule line moves into the planning time fence.

  • BODS 3.1 : SAP R/3 data extraction -What is the difference in 2 dataflows?

    Hi.
    Can anyone advise on the difference between the two data extraction flows for extracting data from SAP R/3?
    1) DF1 >> SAP R/3 data flow (R/3 table - query transform - .dat file) >> query transform >> target
    This ABAP data flow generates an ABAP program and a .dat file.
    We can also upload this program and run the jobs with the 'execute preloaded' option on the datastore.
    This works fine.
    2) We can also pull the SAP R/3 table directly:
    DF2 >> SAP R/3 table (this has a red arrow, like in OHD) >> query transform >> target
    This also works fine, and we are able to see the data directly in Oracle.
    This can also be scheduled as a job.
    But I am unable to understand the purpose of the different types of data extraction flows:
    when to use which type of flow for data extraction, and
    the advantages / disadvantages of the two data flows.
    What we do not understand is this:
    if we can pull data from the R/3 table directly through a query transform into the target table,
    why use the approach of creating an R/3 data flow,
    then doing a query transform again,
    and then populating the target database?
    There might be practical reasons for using these two different types of flows for data extraction, which I would like to understand. Can anyone advise, please?
    Many thanks
    indu

    Hi Jeff.
    Greetings. And many thanks for your response.
    Generally we pull the entire SAP R/3 table through a query transform into Oracle.
    For this we use an R/3 data flow and the generated ABAP program, which we upload to the R/3 system
    so that we can use the 'execute preloaded' option and run the jobs.
    Since we do not have any control over our R/3 servers, nor do we have anyone for ABAP programming,
    we do not do anything at the SAP R/3 level.
    I was doing trial-and-error testing on our workflows for our new requirement:
    WF 1, which reads some 15 R/3 tables.
    For each table we have created a separate data flow.
    In some of the data flows, where the SAP tables had a lot of rows, I decided to pull the data directly,
    bypassing the ABAP flow.
    The entire workflow and data extraction still completes fine.
    In fact, I tried creating a new sample data flow and tested it
    using direct download and also execute preloaded.
    I did not see any major difference in the time taken for data extraction,
    because in any case we pull the entire table, then choose whatever we want to bring into Oracle through a view for our BO reporting, or aggregate it and bring the data in as a table for Universe consumption.
    Actually, I was looking at other options to avoid this ABAP generation and the R/3 data flow, because we are having problems in our dev and QA environments, which give delimiter errors, whereas in production it works fine. The production environment is an old setup of BODS 3.1; QA and dev are relatively new BODS environments, and they are the ones with the delimiter error.
    I did not understand how to resolve it as per this post : https://cw.sdn.sap.com/cw/ideas/2596
    While trying to resolve this problem, I ended up with the option of pulling the R/3 table directly, without using the ABAP flow, just by trial and error with each drag-and-drop option, because we urgently had to do a POC and deliver the data for the entire e-recruiting module of SAP.
    I don't know whether I can do this direct pulling of data for the new job which I have created,
    which has 2 workflows with 15 data flows in each workflow,
    and push this job into production.
    I also don't know whether I can bypass the ABAP flow and pull R/3 data directly in all the data flows in the future, for any of our SAP R/3 data extraction requirements. This technical understanding of the difference between the two flows is not clear to us, and being new to the whole of ETL, I just wanted to know the pros and cons of this particular data extraction approach.
    As advised, I shall check the schedules for a week, and then we shall probably move it into production.
    Thanks again.
    Kind Regards
    Indu

  • Data flows are getting started but not completing successfully while extracting/loading the data

    Hello People,
    We are facing abnormal behavior with the data flows in a Data Services job.
    Scenario:
    We are extracting the data from the CRM side in parallel. Please refer to the build below:
    a. We have 5 main workflows, i.e.:
       => Main WF1 has 6 sub-WFs in it, in which each sub-WF has 1-2 DFs running in parallel.
       => Main WF2 has 21 DFs and 1 WFa -> with a DF & a WFb; WFb has 1 DF in parallel.
       => Main WF3 has 1 DF in parallel.
       => Main WF4 has 3 DFs in parallel.
       => Main WF5 has 1 WF & a DF in sequence.
    b. Normally the job works perfectly fine, but sometimes it gets stuck in a DF without any error logs.
    c. The job doesn’t get stuck at a specific data flow or on a specific day; many times it gets stuck at different DFs.
    d. Observations in the Monitor Log:
    Dataflow          State      RowCnt    LT        AT
    +DF1/ZABAPDF      PROCEED    234000    8.113     394.164
    /DF1/Query        PROCEED    234000    8.159     394.242
    -DF1/Query_2      PROCEED    234000    8.159     394.242
    where LT = Lapse Time and AT = Absolute Time
    If you check the monitor log, the state of data flow DF1 remains PROCEED until the end; ideally it should complete.
    In successful runs, the status for DF1 is STOP. This DF takes approx. 2 minutes to execute.
    The row count for the DF1 extraction is 234204, but it got stuck at 234000.
    We then terminate the job after some time, but surprisingly it executes successfully the next day.
    e. Analysis of all the failed jobs shows the same behavior across the different data flows that got stuck during execution. The logic of the data flows themselves is perfectly fine.
    Observations in the Trace log:
    DATAFLOW: Process to execute data flow <DF1> is started.
    DATAFLOW: Data flow <DF1> is started.
    ABAP: ABAP flow <ZABAPDF> is started.
    ABAP: ABAP flow <ZABAPDF> is completed.
    Cache statistics determined that data flow <DF1> uses <0> caches with a total size of <0> bytes. This is less than (or equal to) the virtual memory <1609564160> bytes available for caches.
    Statistics is switching the cache type to IN MEMORY.
    DATAFLOW: Data flow <DF1> using IN MEMORY Cache.
    DATAFLOW: <DF1> is completed successfully.
    The highlighted lines of the trace log (the cache statistics and completion messages) do not appear for the unsuccessful job, but they do appear for the successful one.
    Note: The cache type is pageable cache, DS ver is 3.2.
    Please suggest.
    Regards,
    Santosh

    Hi Santosh,
    Just a wild guess:
    would you be able to replicate all the DFs/WFs, delete the original DFs/WFs, rename the replicated objects to the original DF/WF names (for your convenience), and execute the job again?
    Sometimes the object references do not work.
    Hope this works.
    Regards,
    Shiva Sahu

  • Background job within another background job

    Hello Experts,
    I have a BDC program (for BW tcode OLI7BW) which executes by scheduling a number of background jobs. This report works fine when run manually, but it doesn't work if it is scheduled as a background job through SM36. The status of the job is shown as finished, but the data is not uploaded.
    Is it not possible to execute a background job within another background job?
    If it's possible, what could be the possible cause of error?
    Thanks In Advance
    Radhika

    Hi Radhika,
    If you are trying to upload data from a file on your desktop into an internal table, then a background job doesn't work. Always remember that the GUI is your front end, and all background jobs run on the application server; they do not run with respect to your desktop.
    Kindly check it and get back to me in case of any queries.
    Don't forget to reward points, if found useful.
    Thanks and Regards,
    Satyesh
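
    As a hedged illustration of the workaround described above (reading the input file from the application server instead of the presentation server, so the read also works in a background job; the file path here is only an example):

      DATA: lv_line TYPE string,
            lt_data TYPE TABLE OF string.
      " read the file from the application server instead of using GUI_UPLOAD
      OPEN DATASET '/usr/sap/trans/data/upload.txt' FOR INPUT IN TEXT MODE ENCODING DEFAULT.
      IF sy-subrc = 0.
        DO.
          READ DATASET '/usr/sap/trans/data/upload.txt' INTO lv_line.
          IF sy-subrc <> 0.
            EXIT.
          ENDIF.
          APPEND lv_line TO lt_data.
        ENDDO.
        CLOSE DATASET '/usr/sap/trans/data/upload.txt'.
      ENDIF.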

  • How to schedule the background job daily twice?

    Hi,
    How do we schedule a background job to run twice daily? Are there any conditions?
    Regards,
    Srihitha

    See the step-by-step procedure below.
    Scheduling Background Jobs:
    1. Background jobs are scheduled by Basis administrators using transaction SM36.
    2. To run a report in the background, a job needs to be created with a step using the report name
    and a variant for the selection parameters. It is recommended to create a separate variant for each
    scheduled job to produce results for specific dates (e.g. the previous month) or organizational units (e.g.
    company codes).
    3. While defining the step, the spool parameters need to be specified
    (Step -> Print Specifications -> Properties) to secure the output of the report and to help authorized users
    find the spool request. The following parameters need to be maintained:
    a. Time of printing: set to “Send to SAP spooler Only for now”
    b. Name – abbreviated name to identify the job output
    c. Title – free form description for the report output
    d. Authorization – a value defined by Security in user profiles to allow those users to access
    this spool request (authorization object S_SPO_ACT, value SPOAUTH). Only users with matching
    authorization value in their profiles will be able to see the output.
    e. Department – set to appropriate department/functional area name. This field can be used in
    a search later.
    f. Retention period – set to “Do not delete” if the report output needs to be retained for more
    than 8 days. Once the archiving/document repository solution is in place the spool requests could
    be automatically moved to the archive/repository. Storage Mode parameter on the same screen
    could be used to immediately send the output to archive instead of creating a spool request.
    Configuring user access:
    1. To access a report output created by a background job, a user must have at
    least access to the SP01 (Spool Requests) transaction without restriction on the user
    name (although by itself this will not let the user see all spool requests). To have
    that access, the user must have the S_ADMI_FCD authorization object in the profile with
    the SPOR (or SP01) value of the S_ADMI_FCD parameter (maintained by Security).
    2. To access a particular job’s output in the spool, the user must have
    S_SPO_ACT object in the profile with SPOAUTH parameter matching the value used
    in the Print Specifications of the job (see p. 3.d above).
    3. Levels of access to the spool (display, print once, reprint, download, etc) are
    controlled by SPOACTION parameter of S_SPO_ACT. The user must have at least
    BASE access (display).
    On-line reports:
    1. Exactly the same configuration can be maintained for any output produced
    from R/3. If a user clicks the “Parameters” button on the SAP printer selection dialog, it
    allows them to specify all the parameters described in p. 3 of the
    “Scheduling background jobs” section. Thus any output created by an online report
    can be saved and accessed by any user authorized to access that spool request
    (the access restriction is provided by the Authorization field of the spool request
    attributes; see p. 3.d of the “Scheduling background jobs” section).
    Access to report’s output:
    1. A user that had proper access (see Configuring user access above) can
    retrieve a job/report output through transaction SP01.
    2. The selection screen can be configured by clicking the “Further selection
    criteria…” button (e.g. to bring up the “Spool request name (suffix 2)” field or hide other
    fields).
    3. The following fields can be used to search for a specific output (note that
    Created By must be blank when searching for scheduled jobs’ outputs):
    a. Spool request name (suffix 2) – corresponds to the spool name in p. 3.b of the
    “Scheduling background jobs” section above.
    b. Date created – to find the output of a job that ran within a certain date range.
    c. Title – corresponds to the spool Title in p. 3.c of the “Scheduling background jobs”
    section above.
    d. Department – corresponds to the spool Department in p. 3.e of the “Scheduling
    background jobs” section above.
    4. Upon entering selection criteria, the user clicks the Execute button to
    retrieve the list of matching spool requests.
    5. From the spool list the user can use several functions, such as viewing the
    content of a spool request, printing the spool request, viewing attributes of the spool
    request, etc. (some functions may need special authorization; see p. 3 in
    Configuring user access):
    a. Click the Print button to print the spool request with the default attributes
    (usually defined with the job definition). It will print on the printer that was
    specified when the job was created.
    b. Click the “Print with changed attributes” button to print the spool request
    with different attributes (e.g. changing the printer name).
    c. Click the “Display contents” button to preview the spool request contents.
    Print and Download functions are available from the preview mode.

  • Can you set a breakpoint within a batch job and look at variables?

    Hello friends,
    I am trying to solve a problem that occurs within a program/transaction which can only be executed in the background. The transaction in question is FPCOPARA, and apparently this program cannot be executed in the foreground.
    If I understand correctly, we cannot set breakpoints within a background job, and as a result we cannot inspect variables etc. during job execution. So the question is: how do you achieve the same goal within a batch job? How did you do it? As this is a standard SAP transaction, no program modification can be applied.
    Your help is greatly appreciated.

    Hi,
    after you have executed your batch job,
    go to SM37, select your job using the checkbox, enter 'JDBG' in the transaction box and press Enter.
    The debugger will now start. Initially it will step through system code; after a while the debugger will reach your code and then you can debug the rest of the report.
    All the best.
    Regards,
    Vivek

  • Error submitting tasks: Operation could not be completed within the specified time

    I've created a Batch Application and I'm having an issue running a job. It's supposed to synchronize 2 Azure blob containers.
    In my JobSplitter, I compare the 2 Azure blob containers and find the files in the first container that need to be copied over to the second one. Just figuring out which files to copy can take about 5 to 6 minutes the first time this is run, depending on the number of files that need to get copied over.
    Once we have the files that need to be copied, we group them into sets and create a task for each set.
    The exe that the TaskProcessor runs doesn't do anything just yet; all it does is write the arguments sent to it into a file. Nothing major.
    On the last run, about 900 tasks were created. Four of the tasks completed successfully, and I can download the output files for them just fine. The next 4 tasks failed with an error.
    Looking in the JobLogs, I see this:
    The job orchestrator finished
    Failed to process job ac8dd4a3-d882-4ec1-847d-48844fa3e1f4: Error submitting tasks: Operation could not be completed within the specified time. RequestId:27eab612-c8ce-4e4b-8dfb-00e7024c71ec Time:2015-01-29T00:37:03.5943898Z
    Has anyone seen this before?
    Thanks,
    matt

    This error is usually a transient error, typically related to an internal storage timeout. We're working on a fix for it.
    In the meantime, if you go into the Batch Apps portal, select the failed job and click Reprocess in the command bar, Batch Apps should re-run the failed tasks and the not-run tasks. (Note that the tasks that already succeeded will not be re-run, and we will not re-run the job splitter to create new tasks.)
    Please let us know if you continue to see this problem or if reprocessing the job doesn't work for you.

  • Making an RFC call from within the VM container

    Hi all,
    for a long time I have been searching for information on how to implement an RFC call from within the VMC. The problem is that we have implemented several (p)functions in ABAP and we need to use them from Java.
    Now I am searching for a way to simply call the already existing pfunctions.
    Is it possible to read CRM DB tables too?
    Thank you in advance
    Boris

    Hi Freeto,
    This may be due to network failures.
    If you have triggered a job, then because of network fluctuations the system may not respond properly and cannot execute the job.
    So this could be the cause of the failure.
    Hope you understood.
    With Regards,
    Ravi Kanth

  • "Who ran me" - how to determine the name of the dbms_scheduler job that ran me

    Hi Community
    I can see plenty of examples on the web that show how you can use dbms_utility.format_call_stack to find the hierarchy of procedures, functions and packages that got me to a particular point in my code.
    For example, if proc (procedure) A calls proc B, which in turn calls proc C, then in the code for proc C I can query the call stack to find out that proc C was called by proc B, which in turn was called by proc A.
    However, I want to extend this further.
    Using the example above, if proc A was in turn started by a dbms_scheduler job, I want to determine (within proc C) the name of the dbms_scheduler job which started the whole process off.
    The reason I want to do this is that I have inherited a (massive) system which is undocumented. In many places within the code, email alerts are sent out using a custom "MAIL" package to designated users (now including me) when certain long-running processes reach certain milestones and/or complete.
    I have added to the custom "MAIL" package a trailer on the mails to show the call stack. I also want to show the name of the dbms_scheduler job which started it all.
    Over time, this info may help me in building the "map" of how the whole undocumented system hangs together and in the meantime, to assist in troubleshooting problems
    Looking forward to hearing from you
    Alan

    Use USER_SCHEDULER_RUNNING_JOBS or DBA_SCHEDULER_RUNNING_JOBS; there is a SESSION_ID column, and once you know your session ID the query is very simple:
    select owner, job_name
    into ...
    from dba_scheduler_running_jobs
    where session_id = sys_context('USERENV','SESSIONID');
    You must declare local variables in the PL/SQL procedure to read OWNER and JOB_NAME into them. Also, you must handle the NO_DATA_FOUND exception that can be raised when the procedure is not run from a job.
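
    A minimal sketch putting those pieces together (assuming the owner of the procedure has SELECT access to DBA_SCHEDULER_RUNNING_JOBS; on recent Oracle versions the view's SESSION_ID appears to hold the v$session SID, so SYS_CONTEXT('USERENV','SID') may be a safer match than 'SESSIONID', which returns the audit session id):

    DECLARE
      l_owner    dba_scheduler_running_jobs.owner%TYPE;
      l_job_name dba_scheduler_running_jobs.job_name%TYPE;
    BEGIN
      SELECT owner, job_name
        INTO l_owner, l_job_name
        FROM dba_scheduler_running_jobs
       WHERE session_id = SYS_CONTEXT('USERENV', 'SID');
      -- e.g. append this to the mail trailer built by the custom "MAIL" package
      DBMS_OUTPUT.put_line('Started by scheduler job: ' || l_owner || '.' || l_job_name);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        -- not started from a dbms_scheduler job (e.g. run interactively)
        DBMS_OUTPUT.put_line('Not running from a dbms_scheduler job');
    END;
    /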

  • Using an WD Component twice within the same Component

    Hi all,
    our company has made a WD Component for editing business partners.
    My job is to build a WD Component where two business partners (different roles) are edited within the same window.
    My idea was to make 2 separate component usages of the Business Partner WD Component within my component.
    This just works fine. But there is a problem when mapping the context of both Business Partner components into my WD component: there are always naming conflicts. This is because it is not allowed to have two nodes with the same name in the context, even if they are in different subnodes. Renaming the nodes after mapping them in my component also doesn't work, because it is not possible to rename nested nodes.
    Does anyone know a solution?
    Kind regards,
    Florian

    Hi Anurag,
    thank you for your help. You are right, it is possible to use a component twice within the same Web Dynpro component.
    But the problem is the context mapping in the target component.
    Let me give you an example.
    This is the context of the twice used component.
    CONTEXT
    |
    |->NODE_1
    |  |
    |  |->SUBNODE_1
    |  |  |
    |  |  |->SUBSUB_NODE_1
    |  |  |  |
    |  |  |  |->ATTRIBUTE_1_1_1
    |  |  |  |->ATTRIBUTE_1_1_2
    |  |  |->SUBSUB_NODE_2
    |  |  |->SUBSUB_NODE_3
    |  |  |->ATTRIBUTE_1_1
    |  |  |->ATTRIBUTE_1_2
    |  |->ATTRIBUTE_1
    |  |->ATTRIBUTE_2
    |  |->ATTRIBUTE_3
    Now, if I map this context (NODE_1) to the target Web Dynpro component, I have a problem. I have to map it twice (once for each used component) so that I can access both used components. But Web Dynpro only allows me to rename the node NODE_1; mapped subnodes (SUBNODE_1, ...) cannot be renamed. So I cannot map the context of both used components, because there are always naming conflicts. Web Dynpro does not allow two nodes with identical names within the context, even if they are in different subnodes.
    Is there a solution? We really need one.
    Thanks
    Florian

  • Using CTRL+F to find text within the query result is not working

    Hi friends,
    I am trying to use the Find option to find text within the result section of a query, but when I press CTRL+F the Find window appears and I can enter the text I want. However, when I press Enter it does not give me any result, even though the entered text is in the query result. I am using SQL Server 2012 Management Studio. Can anybody please help me? There is a job where I have to search for certain things after running a script, and for now I am copying the results to Excel and searching there.

    Prashant,
    It looks like you are trying to do this in SSMS, and it happens because the result set is in Grid view. If you want to use CTRL+F, your results should be in text format: try CTRL+T (to get the results as text) and then use CTRL+F to find any text.
    Thanks
    Manish
    Please click Mark as Answer if my post solved your problem, and click Vote as Helpful if this post was useful.
