T-code SUIM taking much more time for generating output for profile change.

Hi All,
We want to extract a report of profile additions and deletions for users in ECC 6. While executing the t-code SUIM, it is taking much more time than before (more than 20 hrs). This problem appeared after a patch application.
Please suggest a solution to minimize the time taken in report generation.
Thanks-
Guru Prasad Dwivedi

Hello Prasad,
The reason for the performance trouble is a new feature regarding the user change documents. Since notes 874850 and 1015043 you get a more complete overview of the changes regarding a user.
The disadvantage of that new feature is that, in some customer usage scenarios, the performance is very poor. That is the case if the central change documents are also used intensively by other applications and the tables CDPOS, CDHDR, etc. contain a very large number of rows. Unfortunately the user change documents cannot be searched by the key columns of the central change documents - which explains the bad response time.
What now?
There are some workarounds to get the change documents in a faster way.
1st. - You can get the former report output and performance if you
       use the report RSUSR100 instead of the new RSUSR100N in
       separate mode.
2nd. - If you want to use the new report RSUSR100N directly and only
       want the information about the traditional topics
       (content of the USH* tables), mark only those search areas
       on the tabstrip 'user attributes' to get better performance.
     - Furthermore, limit the date range, if possible.
3rd. - You should regularly (monthly) archive the user-relevant documents
       for PFCG and IDENTITY from the central change documents.
       As per note 1079207, chapter 3, you can reload those archives
       into more selective tables; see the sketch after this list.
       Selecting change documents from the reloaded archive tables
       is considerably faster than selecting from the central
       change document tables.
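
To make the access-path problem concrete, here is a hedged, SQL-style sketch (not the actual report code). It assumes only the key columns of the central change document header table CDHDR (OBJECTCLAS, OBJECTID, CHANGENR) and its date column UDATE; user change documents are written under object class IDENTITY, as mentioned above. The user name and date values are placeholders.

    -- Fast: driven by the leading key columns of CDHDR; possible only
    -- when the object ID (here, the user name) is known.
    SELECT objectclas, objectid, changenr, username, udate, tcode
      FROM cdhdr
     WHERE objectclas = 'IDENTITY'
       AND objectid   = 'JSMITH';

    -- Slow: a pure date-range search cannot use the full key, so it has
    -- to sift through a huge share of CDHDR (and then CDPOS) whenever
    -- other applications also write central change documents.
    SELECT objectclas, objectid, changenr, username, udate, tcode
      FROM cdhdr
     WHERE objectclas = 'IDENTITY'
       AND udate BETWEEN '20110101' AND '20110131';

The second pattern is essentially what a date-range search in RSUSR100N has to do, which is why reloading the archived documents into smaller, more selective tables helps so much.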
Best Regards,
Guilherme de Oliveira.

Similar Messages

  • Test program taking much more time on high-end server T5440 than low-end server T5220

    Hi all,
    I have written the following program and run it on both a T5440 [1.4 GHz, 95 GB RAM, 32 cores, 256 logical (virtual) processors] and a T5220 [UltraSPARC-T2 (chipid 0, clock 1165 MHz), 8 GB RAM, 1 core, 8 virtual processors] on the same OS version. I found that the T5440 server takes more time than the T5220. Please find the details below.
    test1.cpp
    #include <iostream>
    #include <pthread.h>
    using namespace std;
    #define NUM_OF_THREADS 20
    struct ABCDEF {
        char A[1024];
        char B[1024];
    };
    // Each thread repeatedly allocates and frees a ~2 KB object.
    void *start_func(void *)
    {
        long long i = 6000;
        while (i--) {
            ABCDEF *sdf = new ABCDEF;
            delete sdf;
            sdf = NULL;
        }
        return NULL;
    }
    int main(int argc, char *argv[])
    {
        pthread_t tid[50];
        for (int i = 0; i < NUM_OF_THREADS; i++) {
            pthread_create(&tid[i], NULL, start_func, NULL);
            cout << "Creating thread " << i << endl;
        }
        for (int i = 0; i < NUM_OF_THREADS; i++) {
            pthread_join(tid[i], NULL);
            cout << "Waiting for thread " << i << endl;
        }
        return 0;
    }
    Executing the above program on the T5440 takes:
    real 0.78s
    user 3.94s
    sys  0.05s
    Executing the above program on the T5220 takes:
    real 0.23s
    user 1.43s
    sys  0.03s
    It seems that the T5440, which is the high-end server, takes almost 3 times more time than the T5220, which is the low-end server.
    However, I have one more observation. I tried the following program:
    test2.cpp
    #include <iostream>
    #include <pthread.h>
    using namespace std;
    #define NUM_OF_THREADS 20   // unused in this single-threaded version
    struct ABCDEF {
        char A[1024];
        char B[1024];
    };
    // Single-threaded version: one thread does all the allocations.
    int main(int argc, char *argv[])
    {
        long long i = 6000000;
        while (i--) {
            ABCDEF *sdf = new ABCDEF;
            delete sdf;
            sdf = NULL;
        }
        return 0;
    }
    It seems that the T5440 server is fast in this case compared to the T5220 server.
    Could anyone please help me find the exact reason for this behaviour, as my application is also slow on this T5440 server? I have posted about this same issue earlier as well.
    Thanks in advance !!!
    regards,
    Sanjay

    You already asked this question...
    48 hours earlier, and in the same Solaris forum space.
    Repeating the post isn't going to get you a response any faster, and may actually make people NOT respond because you are not showing any patience.
    These are end-user community forums, not a place to expect Oracle Technical Support. There is no obligation that there be a response.
    If you have a business-critical issue and hope to get an accurate and timely response, then use your service contract credentials to open a Support request.
    This new redundant post is locked.
    Edit:
    It appears that at the same time the O.P. posted this redundant thread, they also posted the same question to at least one other forum web site:
    http://www.unix.com/solaris/229269-test-program-running-taking-much-more-time-high-end-server-t5440-than-low-end-server-t5220.html

  • After upgrading to Lion, starting up is taking much more time??

    After upgrading to Lion, starting up is taking much more time. Is there any solution to this?
    I have already done the usual things like repairing disk permissions.

    Have you checked what is being started at the same time?
    Lion will start more applications than were being started in the past.
    Allan

  • OR is taking much more time than UNION

    hi gems..
    I have written a query using a UNION clause and it took 12 seconds to return the result.
    Then I wrote the same query using the OR operator, and it took 78 seconds to return the result set.
    The tables referred to by this query have no indexes.
    The cost plan for the query with OR is also much lower than that for the UNION.
    Please suggest why the OR is taking more time.
    thanks in advance

    Here's a ridiculously simple example (these tables don't even have any rows in them).
    If you had separate indexes on col1 and col2, the optimizer might use the indexes in the union but not in the or statement:
    Which is faster will depend on the usual list of things.
    Of course, the union also requires a sort operation.
    SQL> create table table1
      2  (col1 number, col2 number, col3 number, col4 number);
    Table created.
    SQL> create index t1_idx1 on table1(col1);
    Index created.
    SQL> create index t1_idx2 on table1(col2);
    Index created.
    SQL> explain plan for
      2  select col1, col2, col3, col4
      3  from table1
      4  where col1> = 123
      5  or col2 <= 456;
    Explained.
    SQL> @xp
    | Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |        |     1 |    52 |     2   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| TABLE1 |     1 |    52 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("COL1">=123 OR "COL2"<=456)
    SQL> explain plan for
      2  select col1, col2, col3, col4
      3  from table1
      4  where col1 >= 123
      5  union
      6  select col1, col2, col3, col4
      7  from table1
      8  where col2 <= 456;
    Explained.
    SQL> @xp
    | Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |         |     2 |   104 |     4  (75)| 00:00:01 |
    |   1 |  SORT UNIQUE                  |         |     2 |   104 |     4  (75)| 00:00:01 |
    |   2 |   UNION-ALL                   |         |       |       |            |          |
    |   3 |    TABLE ACCESS BY INDEX ROWID| TABLE1  |     1 |    52 |     1   (0)| 00:00:01 |
    |*  4 |     INDEX RANGE SCAN          | T1_IDX1 |     1 |       |     1   (0)| 00:00:01 |
    |   5 |    TABLE ACCESS BY INDEX ROWID| TABLE1  |     1 |    52 |     1   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN          | T1_IDX2 |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("COL1">=123)
       6 - access("COL2"<=456)
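
    As a hedged follow-up sketch (not tested against the poster's real tables): when you control the rewrite yourself, an OR of two range predicates can be expressed as a UNION ALL whose second branch uses Oracle's LNNVL function to exclude the rows already returned by the first branch. That keeps both index range scans while avoiding the SORT UNIQUE a plain UNION pays for deduplication. It reuses the table and columns from the example above.
    select col1, col2, col3, col4
    from table1
    where col1 >= 123
    union all
    select col1, col2, col3, col4
    from table1
    where col2 <= 456
    and lnnvl(col1 >= 123);  -- true when "col1 >= 123" is false or unknown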

  • HT201274 My iPhone 4 is taking more than 5 hours to erase all the data and is still in process; how much more time do I have to wait for my mobile to turn on?

    My iPhone 4 is taking more than 5 hours to erase all the data and it is still in process; how much more time do I have to wait for my mobile to turn on?

    I'm having this EXACT same problem with my iPhone 4, and I have the same computer stats (I have a Samsung Series 7)

  • SSRS report is consuming much more time to fetch data from DB even though a direct run of the SP takes less than a second

    Hi,
    we are using SQL SERVER 2008R2 X64 RTM version. 
    One of the SSRS reports designed by a developer is consuming much more time (5 to 6 minutes) to fetch data from the DB, even though a direct run of the stored procedure (called in the report) takes less than a second to display the result set.
    Please help.
    Regards Naveen MSSQL DBA

    Hi Naveen,
    Based on my understanding, you spend little time retrieving data with the stored procedure from the database in the dataset designer, but it takes a long time to run the report and display the data, right?
    In Reporting Services, the total time to generate a report includes TimeDataRetrieval, TimeProcessing and TimeRendering. In your scenario, since you mentioned that retrieving data costs little time, you should check the view ExecutionLog3 in the ReportServer database to find which section costs most of the time, TimeProcessing or TimeRendering. Then you can refer to this article to optimize your report:
    Troubleshooting Reports: Report Performance.
    Besides, if parameters exist in the report, you should declare variables inside the stored procedure and assign the incoming parameters to the variables. For more information, please refer to the similar thread:
    Fast query runs slow in SSRS.
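    Returning to the ExecutionLog3 check above, a query along these lines shows which phase dominates for recent executions (a sketch; ExecutionLog3 is the standard logging view in the ReportServer catalog database, and the TOP value is arbitrary):
    -- Times are reported in milliseconds.
    SELECT TOP (20)
           ItemPath,
           TimeStart,
           TimeDataRetrieval,
           TimeProcessing,
           TimeRendering,
           Status
    FROM   dbo.ExecutionLog3
    ORDER BY TimeStart DESC;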
    If you have any question, please feel free to ask.
    Best regards,
    Qiuyun Yu
    TechNet Community Support

  • Why is this query taking much longer than expected?

    Hi,
    I need experts support on the below mentioned issue:
    Why is this query taking much longer than expected? Sometimes I get a connection timeout error. Is there a better way to achieve the result in the shortest time? Below, please find the DDL & DML:
    DDL
    BHDCollections
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BHDCollections](
     [BHDCollectionid] [bigint] IDENTITY(1,1) NOT NULL,
     [GroupMemberid] [int] NOT NULL,
     [BHDDate] [datetime] NOT NULL,
     [BHDShift] [varchar](10) NULL,
     [SlipValue] [decimal](18, 3) NOT NULL,
     [ProcessedValue] [decimal](18, 3) NOT NULL,
     [BHDRemarks] [varchar](500) NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_BHDCollections] PRIMARY KEY CLUSTERED
     (
      [BHDCollectionid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    BHDCollectionsDet
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[BHDCollectionsDet](
     [CollectionDetailid] [bigint] IDENTITY(1,1) NOT NULL,
     [BHDCollectionid] [bigint] NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](18, 3) NOT NULL,
     [Quantity] [int] NOT NULL,
     CONSTRAINT [PK_BHDCollectionsDet] PRIMARY KEY CLUSTERED
     (
      [CollectionDetailid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    Banks
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Banks](
     [Bankid] [int] IDENTITY(1,1) NOT NULL,
     [Bankname] [varchar](50) NOT NULL,
     [Bankabbr] [varchar](50) NULL,
     [BankContact] [varchar](50) NULL,
     [BankTel] [varchar](25) NULL,
     [BankFax] [varchar](25) NULL,
     [BankEmail] [varchar](50) NULL,
     [BankActive] [bit] NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_Banks] PRIMARY KEY CLUSTERED
     (
      [Bankid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    Groupmembers
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[GroupMembers](
     [GroupMemberid] [int] IDENTITY(1,1) NOT NULL,
     [Groupid] [int] NOT NULL,
     [BAID] [int] NOT NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_GroupMembers] PRIMARY KEY CLUSTERED
     (
      [GroupMemberid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_BankAccounts] FOREIGN KEY([BAID])
    REFERENCES [dbo].[BankAccounts] ([BAID])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_BankAccounts]
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_Groups] FOREIGN KEY([Groupid])
    REFERENCES [dbo].[Groups] ([Groupid])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_Groups]
    BankAccounts
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BankAccounts](
     [BAID] [int] IDENTITY(1,1) NOT NULL,
     [CustomerID] [int] NOT NULL,
     [Locationid] [varchar](25) NOT NULL,
     [Bankid] [int] NOT NULL,
     [BankAccountNo] [varchar](50) NOT NULL,
     CONSTRAINT [PK_BankAccounts] PRIMARY KEY CLUSTERED
     (
      [BAID] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Banks] FOREIGN KEY([Bankid])
    REFERENCES [dbo].[Banks] ([Bankid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Banks]
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Locations1] FOREIGN KEY([Locationid])
    REFERENCES [dbo].[Locations] ([Locationid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Locations1]
    Currency
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Currency](
     [Currencyid] [int] IDENTITY(1,1) NOT NULL,
     [CurrencyISOCode] [varchar](20) NOT NULL,
     [CurrencyCountry] [varchar](50) NULL,
     [Currency] [varchar](50) NULL,
     CONSTRAINT [PK_Currency] PRIMARY KEY CLUSTERED
     (
      [Currencyid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    CurrencyDetails
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[CurrencyDetails](
     [CurDenid] [int] IDENTITY(1,1) NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](15, 3) NOT NULL,
     [DenominationType] [varchar](25) NOT NULL,
     CONSTRAINT [PK_CurrencyDetails] PRIMARY KEY CLUSTERED
     (
      [CurDenid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    QUERY
     WITH TEMP_TABLE AS
     (
    SELECT     0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'Currency') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)
    UNION ALL
    SELECT     BHDCollectionsDet.Quantity AS COINS, 0 AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'COIN') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)),
     TEMP_TABLE2 AS
     (
     SELECT CollectionDate, Bankname, DSLIPS AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS FROM TEMP_TABLE GROUP BY CollectionDate, DSLIPS, Bankname
     )
    SELECT CollectionDate,Bankname,count(DSLIPS) AS DSLIPS,sum(BN) AS BN,sum(COINS) AS coins FROM TEMP_TABLE2 Group By CollectionDate,Bankname
    HAVING COUNT(DSLIPS)<>0;

    Without seeing an execution plan of the query it is hard to suggest something useful. Try inserting the result of the UNION ALL into a temporary table and then perform the aggregation on that table, not a CTE.
    Just:
    SELECT CollectionDate, Bankname, DSLIPS AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS
    FROM #tmp
    GROUP BY CollectionDate, DSLIPS, Bankname
    HAVING COUNT(DSLIPS) <> 0;
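    Spelled out slightly further, the rewrite could look like the hedged sketch below. The temp table name #tmp is a hypothetical choice, and the UNION ALL part of the original query is abbreviated to a comment rather than repeated:
    -- Materialize the heavy intermediate result once.
    WITH TEMP_TABLE AS
    (
        SELECT ...   -- the same UNION ALL as in the original query, ideally
                     -- with the date/bank filters moved from HAVING to WHERE
                     -- so rows are cut before grouping
    )
    SELECT CollectionDate, Bankname, DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS
    INTO   #tmp
    FROM   TEMP_TABLE
    GROUP BY CollectionDate, DSLIPS, Bankname;

    -- The second-level aggregation now runs against the small temp table.
    SELECT CollectionDate, Bankname, COUNT(DSLIPS) AS DSLIPS,
           SUM(BN) AS BN, SUM(COINS) AS coins
    FROM   #tmp
    GROUP BY CollectionDate, Bankname
    HAVING COUNT(DSLIPS) <> 0;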
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Can anyone help me with the code for a generated program for gate pass

    Can anyone help me with the code for a generated program for gate pass in MM?
    Message was edited by:
            Ronei Shedi

    Hi
    There is no standard business process in SAP for a gate pass for material entry
    into the stores before the GR stock entry.
    You have to write a Z program based on the details of the purchase order tables EKKO and EKPO.
    This will mainly check whether the correct PO quantity was delivered or not, with proper quality.
    So use the PO tables EKKO and EKPO, fetch the data, and use it; a sketch of such a read follows below.
    Since this is client specific, there is no generalised program for it.
    Reward points if useful
    Regards
    Anji
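
    A minimal sketch of the kind of read Anji describes, written as plain SQL over the standard PO tables (EBELN, EBELP, LIFNR, MATNR and MENGE are standard EKKO/EKPO fields; the PO number in the WHERE clause is a placeholder):
    -- Read PO header and items for one purchase order; a Z gate-pass
    -- program would compare MENGE (ordered quantity) against the
    -- quantity actually arriving at the gate.
    SELECT h.ebeln,      -- PO number
           h.lifnr,      -- vendor
           i.ebelp,      -- item number
           i.matnr,      -- material
           i.menge       -- ordered quantity
    FROM   ekko AS h
           INNER JOIN ekpo AS i
                   ON i.ebeln = h.ebeln
    WHERE  h.ebeln = '4500000001';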

  • Application Diagnostics takes a very long time to generate its output report

    Hello All
    Please give me suggestions on how I can improve the performance of Application Diagnostics. It takes a very long time to generate the output report for a single transaction, i.e. an invoice.
    Thanks
    Makshud

    Are the statistics collected up to date?
    Please make sure you have applied the latest patches as per these docs:
    E-Business Suite Diagnostics Installation Guide [ID 167000.1]
    E-Business Suite Diagnostic Tools FAQ and Troubleshooting Guide for Release 11i and R12 [ID 235307.1]
    If you still have performance issues when running the tool, I would suggest you log an SR.
    Thanks,
    Hussein

  • Display form at UWL error " Insufficient information for generating output"

    Hi experts,
    I got an error while using the "display form" button in the UWL.
    The error is "Insufficient information for generating output (missing printer, for ex.)".
    The ABAP call stack was:
    Form: USEREXIT_TOP_2 of program RPRTEF00
    Form: TOP_OF_PAGE of program RPRTEF00
    TOP-OF-PAGE of program RPRTEF00
    Form: PRINT-REISEVERLAUF of program RPRTEF00
    Form: DRUCKE_REISE of program RPRTEF00
    Form: PRINT_TRIP of program RPRTEF00
    Form: DRUCKE-REISEN of program RPRTEF00
    Form: DRUCKE_PERSONALNUMMER of program RPRTEF00
    Form: %_GET_PERNR of program RPRTEF00
    Form: FILL_INFOTYPE_TABLES_AND_PUT of program SAPDBPNP
    How can I solve this?
    Thanks

    Can you elaborate on what exactly you configured, and how and why? I fail to understand what exactly you did, because in my company we have used the standard up until now; that means the UWL configuration file "com.sap.pct.erp.mss.tra".
    And for the request-form handling it has this action:
    <Action name="com.sap.pct.erp.mss.tra.action.DisplayRequestForm" groupAction="" handler="SAPAppLauncher" referenceBundle="com.sap.pct.erp.mss.tra.DisplayForm" returnToDetailViewAllowed="yes" launchInNewWindow="yes" launchNewWindowFeatures="toolbar=no,menubar=no">
          <Properties>
            <Property name="sap.xss.tra.TripNo" value="${item.TripNumber}"/>
            <Property name="display_order_priority" value="10"/>
            <Property name="SAPIntegrator" value="ROLES://portal_content/com.sap.pct/every_user/com.sap.pct.erp.ess.bp_folder/com.sap.pct.erp.ess.roles/com.sap.pct.erp.ess.employee_self_service/com.sap.pct.erp.ess.employee_self_service/com.sap.pct.erp.ess.area_travel_expenses/com.sap.pct.erp.ess.tripform"/>
            <Property name="sap.xss.tra.TripComponent" value="R"/>
            <Property name="sap.xss.tra.PersNo" value="${item.EmployeeNumber}"/>
          </Properties>
        </Action>
    So I'm kind of missing the "common thread" in your development; could you go into more detail?

  • Generating output for just ONE WebHelp page

    Is it possible to generate output in RoboHelp 9 for just a single selected WebHelp topic instead of the entire project?
    This is for a project with hundreds of topics, where occasionally I might make a change to only one of them and need to deploy just that one page of output. But I can only generate output for the entire project. The only options I seem to have are:
    1. Output everything, and ignore or discard everything apart from the one htm page I want
    2. As well as editing the source of the topic, also edit a copy of the previous output.
    I'm assuming the answer is "no", having looked quite hard, but I thought it worth asking. Fortunately, it's not something I need to do very often - our agile methodology means I'm usually deploying an entire project very regularly.
    Thanks
    Nick Shears

    Hi Nick
    Just a few questions here.
    Is the avoidance of generating the entire project related to the source control issue, or is it related to a publishing issue? For example, you wish to put the single changed file in place on the server, but you dislike having to copy them all?
    If it's a "copy to server" issue, I might suggest you investigate using the Publish function. The first time you publish, all files are copied from your output folder to the server, but with each subsequent publish action, only the changed files that actually need to be copied are copied.
    If it's related to source control, I suppose you could accomplish the same by simply amending your process a bit. Generate to the empty location, but configure publishing so it publishes from the empty location to your repository. Then only the necessary files are changed.
    Cheers... Rick

  • What's the Oracle Standard for generating Output & logfile for Conc Prog?

    Is there any Oracle standard for generating output and log files for standard concurrent programs and reports?
    For example: if Error, only log, no output;
    if Warning, log & output;
    on Normal completion, log & output.
    Any help is appreciated...
    Thanks,
    Subhadeep

    APPS.FND_CONCURRENT -- procedure get_dev_phase_status
    http://etrm.oracle.com/pls/et1211d9/etrm_pnav.show_details?c_name=FND_CONCURRENT&c_owner=APPS&c_type=PACKAGE%20BODY&c_detail_type=source
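    For illustration only, a hedged PL/SQL sketch using FND_CONCURRENT.GET_REQUEST_STATUS (the commonly documented call for reading a request's developer phase/status; verify the signature against your EBS release, and note that the log/output mapping in the comments is the one the poster asked about, not a guaranteed standard):
    DECLARE
      l_request_id NUMBER := :request_id;  -- bind: concurrent request id
      l_phase      VARCHAR2(80);
      l_status     VARCHAR2(80);
      l_dev_phase  VARCHAR2(80);
      l_dev_status VARCHAR2(80);
      l_message    VARCHAR2(2000);
      l_ok         BOOLEAN;
    BEGIN
      l_ok := fnd_concurrent.get_request_status(
                request_id => l_request_id,
                phase      => l_phase,
                status     => l_status,
                dev_phase  => l_dev_phase,   -- e.g. 'COMPLETE'
                dev_status => l_dev_status,  -- e.g. 'NORMAL' / 'WARNING' / 'ERROR'
                message    => l_message);
      -- Poster's example mapping: ERROR -> log only; WARNING/NORMAL -> log & output.
      dbms_output.put_line(l_dev_phase || ' / ' || l_dev_status);
    END;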
    Thanks,
    Hussein

  • [svn:fx-trunk] 12207: Fix for [Managed] metadata prevents ASDoc from generating output for setter/getters

    Revision: 12207
    Author:   [email protected]
    Date:     2009-11-25 11:53:15 -0800 (Wed, 25 Nov 2009)
    Log Message:
    Fix for [Managed] metadata prevents ASDoc from generating output for setter/getters
    QE notes: None
    Doc notes: None
    Reviewed By: Paul
    Bugs: SDK-23940
    Tests run: checkintests, asdoc
    Is noteworthy for integration: No
    Ticket Links:
        http://bugs.adobe.com/jira/browse/SDK-23940
    Modified Paths:
    flex/sdk/trunk/modules/compiler/src/java/flex2/compiler/as3/genext/GenerativeSecondPassEvaluator.java

  • Concurrent programs taking an unusually long time..

    Hi,
    One of our concurrent request sets, which used to complete within about 3 hrs, is taking an unusually large amount of time. We checked for locks but couldn't find any.
    Can anyone advise what could be wrong here?
    Thanks,
    Praveen

    One of our concurrent request sets which used to complete within 3 hrs is taking an unusually large amount of time
    I understand that this happens to one concurrent request set only. Enable trace on this request to find out why it takes that long to run; a sketch of one way to do that follows.
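    One hedged way to do that, assuming direct database access (within EBS you can alternatively tick "Enable Trace" on the concurrent program definition): find the database session serving the request, e.g. via V$SESSION, and switch an extended SQL trace on and off around the run with DBMS_MONITOR. The :sid/:serial# values are placeholders taken from V$SESSION.
    -- Enable a 10046-style trace including waits and binds.
    BEGIN
      dbms_monitor.session_trace_enable(
        session_id => :sid,
        serial_num => :serial#,
        waits      => TRUE,
        binds      => TRUE);
    END;
    -- ...let the request set run, then:
    BEGIN
      dbms_monitor.session_trace_disable(
        session_id => :sid,
        serial_num => :serial#);
    END;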

  • Issue output options for reprinting output for delivery items

    Hi Gurus,
    I want to create a new output for delivery, but at item level.
    There is a standard procedure available to trigger output at item level.
    But I am not able to find the option to re-issue the output or see the print preview, as that option is not available in the standard menu in VL02N.
    Because of this I am not able to debug the code, since when the output is triggered at item level both the program and the script are called in the background.
    So, is there any standard option available at item level to see the print preview, as we can at header level?
    If you require any additional information then please let me know. Eagerly waiting for the reply.
    Regards,
    Sagar

    Hi Arun,
    Yes, I want the output to be triggered at item level in the delivery.
    If I trigger the output at item level, I see the output as green (i.e. successful),
    but when I go to the menu and try to issue the output, I do not see this output type in the list; it shows only the header output types.
    Because of this I am not able to debug my program when doing the print preview.
    So I need a way to see my program in debugging mode.
    And VT70 is not useful to me: I do not have a shipment number, I have only the delivery.
    Regards,
    Sagar
