Performance management with ESS and no MSS??

Hello Gurus,
I have a very peculiar scenario regarding ESS/MSS.
Let me first post our requirements:
We have implemented all modules of HR along with OM, and we are now in the process of implementing Performance Management along with ESS only. I am stressing ESS only here, with NO MSS functionality.
Now my question is: is it really possible in the first place to have Performance Management functionality without MSS, with standalone ESS only? If yes, what options do I have to configure the Performance Management system and ESS so that the workflows for planning, rating an employee, and appraisal approvals can be achieved?
Is it possible to assign an ESS role to the manager so that when the manager logs in to ESS, he/she can carry out the performance management process and review employees just like in MSS?
Thanks a lot for your time.
Best Regards.
Karan.

Hi,
In your case, check the HR Administrator role for final ratings and related tasks; also check the R/3 desktop services.
But why are they not going for MSS? OM is in place, right?
regards
rafi

Similar Messages

  • [URGENT] Performance problem with BC4J and partitioned data

    Hi all,
    I have a big performance problem with BC4J and partitioned data. As a partitioned table shouldn't have a primary key based on a sequence (or anything else), my partitioned table doesn't have any primary key.
    When I debug my BC4J application I can see the message "ignoring row with no primary key" from EntityCache. It takes a long time to retrieve my data even if I use the partition keys. A quick & dirty Forms application was multiple times faster!
    Is this a bug in BC4J, or is BC4J not suitable for partitioned data? Can anyone give me a hint what to do to make the BC4J application fast even with partitioned data? In a non-partitioned environment the application works quite well, so it seems the "error" must be somewhere in this area.
    Thanks,
    Axel

    Here's a SQL statement that creates the table.
    CREATE TABLE SEARCH
    (SEAR_PARTKEY_DAY              NUMBER(4)        NOT NULL
    ,SEAR_PARTKEY_EMP            VARCHAR2(2)      NOT NULL
    ,SEAR_ID                     NUMBER(20)       NOT NULL
    ,SEAR_ENTRY_DATE             TIMESTAMP        NOT NULL
    ,SEAR_LAST_MODIFIED            TIMESTAMP             NOT NULL
    ,SEAR_STATUS                 VARCHAR2(100)    DEFAULT '0'
    ,SEAR_ITC_DATE               TIMESTAMP        NOT NULL
    ,SEAR_MESSAGE_CLASS          VARCHAR2(15)     NOT NULL
    ,SEAR_CHIPHERING_TYPE        VARCHAR2(256)   
    ,SEAR_GMAT                   VARCHAR2(1)      DEFAULT 'U'
    ,SEAR_NATIONALITY            VARCHAR2(3)      DEFAULT 'XXX'
    ,SEAR_MESSAGE_ID             VARCHAR2(32)     NOT NULL
    ,SEAR_COMMENT                VARCHAR2(256)    NOT NULL
    ,SEAR_NUMBER_OF              NUMBER(3)        NOT NULL
    ,SEAR_INTERCEPTION_SYSTEM    VARCHAR2(40)    
    ,SEAR_COMM_PRIOD_H           NUMBER(5)        DEFAULT -1
    ,SEAR_PRIOD_R                  NUMBER(5)        DEFAULT -1
    ,SEAR_INMARSAT_CES           VARCHAR2(40)    
    ,SEAR_BEAM                   VARCHAR2(10)    
    ,SEAR_DIALED_NUMBER          VARCHAR2(70)    
    ,SEAR_TRANSMIT_NUMBER        VARCHAR2(70)    
    ,SEAR_CALLED_NUMBER          VARCHAR2(40)    
    ,SEAR_CALLER_NUMBER          VARCHAR2(40)    
    ,SEAR_MATERIAL_TYPE          VARCHAR2(3)      NOT NULL
    ,SEAR_SOURCE                 VARCHAR2(10)    
    ,SEAR_MAPPING                VARCHAR2(100)    DEFAULT '__REST'
    ,SEAR_DETAIL_MAPPING         VARCHAR2(100)
    ,SEAR_PRIORITY               NUMBER(3)        DEFAULT 255
    ,SEAR_LANGUAGE               VARCHAR2(5)      DEFAULT 'XXX'
    ,SEAR_TRANSMISSION_TYPE      VARCHAR2(40)    
    ,SEAR_INMARSAT_STD           VARCHAR2(1)     
    ,SEAR_FILE_NAME              VARCHAR2(100)    NOT NULL
    )
    PARTITION BY RANGE (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP)
    (PARTITION SEARCH_MAX VALUES LESS THAN (MAXVALUE, MAXVALUE) TABLESPACE MIRA4_SEARCH_EVEN
    );
    Of course SEAR_ID is filled by a sequence, but the field is not the primary key, as that would decrease the performance of the partitioned data.
    We moved our application to native JDBC and the performance is better than we ever expected!
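    If the table does eventually need a key that BC4J can recognise, one hedged option (assuming SEAR_ID is unique and a partition-aligned local index is acceptable for the partitioning strategy) would be a primary key that includes the partition columns, for example:
    -- Sketch only: a local unique index backing a primary key constraint on the partitioned table.
    CREATE UNIQUE INDEX SEARCH_PK_IX
      ON SEARCH (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP, SEAR_ID) LOCAL;
    ALTER TABLE SEARCH
      ADD CONSTRAINT SEARCH_PK PRIMARY KEY (SEAR_PARTKEY_DAY, SEAR_PARTKEY_EMP, SEAR_ID)
      USING INDEX SEARCH_PK_IX;
    This would give the BC4J entity cache a key to work with, at the cost of extra index maintenance; whether that trade-off is acceptable here is an assumption to verify.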

  • Performance management with ONLY ESS and NO MSS??

    Dear Karan,
    Firstly, it is not mandatory to implement MSS for approvals/notifications...
    "Manager" in MSS terms is used to refer to the Head of Unit (in general) and not the immediate supervisor, i.e., relationship (012).
    UWL (Universal Worklist) is the place where all work items for approval are listed, and it can be activated in ESS without using MSS functionality.
    Coming to Performance Management: yes, it can be achieved. NO MSS is required for appraisals.
    If you are using BSP applications, there are different pages available for the employee view and the manager view.
    All you need to do is use the appropriate pages in the links that you are going to provide under ESS > Performance Management.
    SAP has provided for creating your own notifications (workflow) in configuration. Create a Z event and ask your ABAPer to send a notification based on that Z event. This is all standard.
    Regards
    ...Sadhu

  • Performance management workflow appraisee and appraiser name to be passed in email

    I am working on the performance management workflow. When the manager chooses to require an employee self-assessment, an email notification should be triggered to the employee (appraisee) notifying them of the self-assessment to be taken. This is my requirement.
    I have copied the standard workflow WS12300113 and customized it to send the notification to the appraisee. I have customized the business object APPR_DOC. I am using the custom event and the event linkage is activated.
    The manager selects the employee from MSS and the workflow is triggered. The mail notification sends a mail to the appraisee (employee), stating:
    Dear Appraisee Name,
    Please schedule a meeting for a Crucial Conversation on your Tasks & Targets Setting/Review with your Line Supervisor.
    Kindly update appraisal document with agreements from the meeting.
    Your Appraiser is Appraiser Name.
    Since I am new to workflow, I am confused about passing the recipient type (employee) from the container, and about filling in the manager and employee names (the placeholders shown above).
    Kindly help me out with this.

    Hi Paul,
    Thanks for the concern!
    I created a custom class taking the appraisal document ID and plan version as importing parameters.
    I am passing these parameters to the HRHAP_APPER table and fetching the extended ID (field ID).
    I am passing this extended ID to the BAPI "BAPI_EMPLCOMM_GETDETAILEDLIST".
    Here I am getting multiple entries: using subtype '0001' I get the username, and
    with subtype '0010' I get the email address of the manager.
    Passing the username fetched in the above step to the BAPI HR_GETEMPLOYEEDATA_FROMUSER,
    I get the name of the manager.
    I have done the binding of the method parameters with the task container, and of the task container elements with the workflow container elements.
    But my concern is that I am not getting any values from the HRHAP_APPER table.
    Can you help me with an alternative solution where I can get the user ID based on the appraisal ID and plan version?

  • Performance issue with calendar and applescript

    Hi Community,
    I have a performance issue using AppleScript and Calendar with this script:
    tell application "Calendar"
        tell calendar "Cal"
            set theList to (get {summary, start date, end date, uid} of events)
        end tell
    end tell
    There are approx. 700 events in the calendar "Cal", so the get command takes about 15 seconds. The problem is that iCal is completely blocked for this time, which means it is not even possible to scroll through the calendar. This problem occurs only under OS X 10.9; with OS X 10.8.x it is still possible to use Calendar even while a time-consuming get command is being processed.
    Any ideas? Maybe there is a way to reduce the task priority of an AppleScript?

    I have to step in here...
    1) Must I set "None" or "On Time"
    - In order for the Calendar to fire an Alarm, it must know what time to fire the alarm. In the event of an All Day Event, it will go off at 12am. The option for "Repeat", below the "Alarm", states the frequency of the event (Daily, Weekly, Monthly, Yearly, etc). So to set an alarm that fires once a month, set the TIME you want the alarm to go off (Make sure "All Day" is unchecked if you want a specific time), then choose "On Time" for the "Alarm", and one of the several "Monthly" options for "Repeat". If I missed something in what you were asking, please let me know and I will do my best to more directly answer your question.
    2) Calendar cannot sync with the Mac.
    - Not directly. However, your phone automatically syncs with your Google Calendar, which was set up when you created your account. If you so choose, you may export your iCal calendar, import it into your Google calendar, and then use your Google calendar (http://calendar.google.com) to manage your agenda. The changes sync automatically with your device.
    Once again, I hope this shed some light on things. To the Verizon rep who originally answered this question: I have no intention to bash you, however please bear in mind that your opinions and comments will always be held in higher regard than mine, so if you choose to answer a question, please try to solve the problem as opposed to just answer the question. I have experience with all manner of devices and operating systems, from WebOS to BB to iOS to Android, and I believe this phone has the best hardware coupled with a solid operating system in TouchWiz, and I don't want to see people frustrated with these devices by questions that get nothing more than, "You can't do that" answers from the people that are expected to support them.

  • Replacing Performance Manager with SSM (SAP Strategy Management)

    Hi,
    Can anybody tell me how to migrate all the scorecards and KPIs based on Performance Manager to SAP Strategy Management? Is there a tool for that in SSM, and if not, should those be rebuilt in SSM from scratch?
    How do we replicate everything from PM to SSM?
    From the developer's perspective, what will their tasks be?
    Edited by: chrisbaker1999 on Jun 16, 2011 1:01 PM

    Chris,
    What the External Data Loader does is to allow you to transfer your scorecard structure from Performance Manager to Strategy Management. What you are importing is the structure of Contexts, Perspectives, Objectives and KPI aspects. It allows you to put all of these in the loader, verify the structure in the loader, and then using Transport to bring into SSM.
    There are additional definitions and settings that SSM requires that will have to be added after you Transport the loader's data.
    You will still have to build PAS models separately, because the External Data Loader does not build models.
    The Transporter tool in SSM allows you to export (and then import) your scorecard metadata from DEV to PROD. Moving between systems is a two-step process, and Transporter is step one. PAS is separate (step two) and you make a dump of the model in your DEV system, take that dump file from the Home directory in DEV and move it to the Home directory in PROD. In the PROD system you create a new PAS model frame and then load that dump into the new framework.
    Each SAP customer has a super-administrator that can add users to the SAP Service Marketplace. Find out who your super-admin is and get yourself added to allow you access to those areas on Service Marketplace.
    Regards,
    Bob

  • Performance problems with XMLTABLE and XMLQUERY involving relational data

    Hello-
    Is anyone out there using XMLTABLE or XMLQUERY with more than a toy set of data? I am running into serious performance problems trying to do basic things such as:
    * Combine records in 10 relational tables into a single table of XMLTYPE records using XMLTABLE. This hangs indefinitely for any more than 800 records. Oracle has confirmed that this is a problem and is working on a fix.
    * Combine a single XMLTYPE record with several relational code tables into a single XMLTYPE record using XMLQUERY and ora:view() to insert code descriptions after each code. Performance is 10 seconds for 10 records (terrible) when passing a batch of records, or 160 seconds for one record (unacceptable!). How can it take 16 times longer to process 1/10th the number of records? Ironically, the query plan says it will do a full table scan of records for the batch, but an index access for the one record passed to the XMLQUERY.
    I am rapidly losing faith in XML DB, and desperately need some hints on how to work around these performance problems, or at least some assurance that others have been able to get this thing to perform.

    <Note>Long post, sorry.</Note>
    First, thanks for the responses above. I'm impressed with the quality of thought put into them. (Do the forum rules allow me to offer rewards? :) One suggestion in particular made a big performance improvement, and I’m encouraged to hear of good performance in pure XML situations. Unfortunately, I think there is a real performance challenge in two use cases that are pertinent to the XML+relational subject of this post and probably increasingly common as XML DB usage increases:
    •     Converting legacy tabular data into XML records; and
    •     Performing code table lookups for coded values in XML records.
    There are three things I want to accomplish with this post:
    •     Clarify what we are trying to accomplish, which might expose completely different approaches than I have tried
    •     Let you know what I tried so far and the rationale for my approach to help expose flaws in my thinking and share what I have learned
    •     Highlight remaining performance issues in hopes that we can solve them
    What we are trying to accomplish:
    •     Receive a monthly feed of 10,000 XML records (batched together in text files), each containing information about an employee, including elements that repeat for every year of service. We may need to process an annual feed of 1,000,000 XML records in the future.
    •     Receive a one-time feed of 500,000 employee records stored in about 10 relational tables, with a maximum join depth of 2 or 3. This is inherently a relational-to-XML process. One record/second is minimally acceptable, but 10 records/sec would be better.
    •     Consolidate a few records (from different providers) for each employee into a single record. Given the data volume, we need to achieve a minimum rate of 10 records per second. This may be an XML-only process, or XML+relational if code lookups are done during consolidation.
    •     Allow the records to be viewed and edited, with codes resolved into user-friendly descriptions. Since a user is sitting there, code lookups done when a record is viewed (vs. during consolidation) should not take more than 3 seconds total. We have about 20 code tables averaging a few hundred rows each, though one has 450,000 rows.
    As requested earlier, I have included code at the end of this post for example tables and queries that accurately (but simply) replicate our real system.
    What we did and why:
    •     Stored the source XML records as CLOBS: We did this to preserve the records exactly as they were certified and sent from providers. In addition, we always access the entire XML record as a whole (e.g., when viewing a record or consolidating employee records), so this storage model seemed like a good fit. We can copy them into another format if necessary.
    •     Stored the consolidated XML employee records as “binary XML”. We did this because we almost always access a single, entire record as a whole (for view/edit), but might want to create some summary statistics at some point. Binary XML seemed the best fit.
    •     Used ora:view() for both tabular source records and lookup tables. We are not aware of any alternatives at this time. If it made sense, most code tables could be pre-converted into XML documents, but this seemed risky from a performance standpoint because the lookups use both code and date range constraints (the meaning of codes changes over time).
    •     Stored records as XMLTYPE columns in a table with other key columns on the table, plus an XMLTYPE metadata column. We thought this would facilitate pulling a single record (or a few records for a given employee) quickly. We knew this might be unnecessary given XML indexes and virtual columns, but were not experienced with those and wanted the comfort of traditional keys. We did not used XMLTYPE tables or the XML Repository for documents.
    •     Used XMLTABLE to consolidate XML records by looping over each distinct employee ID in the source batch. We also tried XMLQUERY and it seems to perform about the same. We can achieve 10 to 20 records/second if we do not do any code lookups during consolidation, just meeting our performance requirement, but still much slower than expected.
    •     Used PL/SQL with XMLFOREST to convert tabular source records to XML by looping over distinct employee IDs. We tried this outside PL/SQL both with XMLFOREST and XMLTABLE+ora:view(), but it hangs in both cases for more than 800 records (a known/open issue). We were able to get it to work by using an explicit cursor to loop over distinct employee IDs rather than processing all records at once within the query (a minimal sketch of this pattern follows this list). The performance is one record/second, which is minimally acceptable but interferes with other database activity.
    •     Used XMLQUERY plus ora:view() plus XPATH constraints to perform code lookups. When passing a single employee record, the response time ranges from 1 sec to 160 sec depending on the length of the record (i.e., number of years of service). We achieved a 5-fold speedup using an XMLINDEX (thank you Marco!!). The result may be minimally acceptable, but I’m baffled why the index would be needed when processing a single XML record. Other things we tried: joining code tables in the FOR...WHERE clauses, joining code tables using LET with XPATH constraints and LET with WHERE clause constraints, and looking up codes individually via JDBC from the application code at presentation time. All those approaches were slower. Note: the difference I mentioned above in equality/inequality constraint performance was due to data record variations not query plan variations.
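    For the legacy conversion item above, here is that minimal explicit-cursor sketch, purely as an illustration (the EMPLOYEES table and its EMP_ID/LAST_NAME columns are placeholders, not our real schema):
    -- Hypothetical sketch: build one XML document per employee with an explicit loop,
    -- using XMLELEMENT/XMLFOREST inside the query.
    DECLARE
      v_doc XMLTYPE;
    BEGIN
      FOR e IN (SELECT DISTINCT emp_id FROM employees) LOOP
        SELECT XMLELEMENT("Employee",
                 XMLFOREST(emp_id AS "Id", last_name AS "LastName"))
          INTO v_doc
          FROM employees
         WHERE emp_id = e.emp_id
           AND ROWNUM = 1;
        -- store v_doc, e.g. insert it into a staging table of XMLTYPE
      END LOOP;
      COMMIT;
    END;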
    What issues remain?
    We have a minimally acceptable solution from a performance standpoint with one very awkward PL/SQL workaround. The performance of a mixed XML+relational data query is still marginal IMHO, until we properly utilize available optimizations, fix known problems, and perhaps get some new query optimizations. On the last point, I think the query plan for tabular lookups of codes in XML records is falling short right now. I’m reminded of data warehousing in the days before hash joins and star join optimization. I would be happy to be wrong, and just as happy for viable workarounds if I am right!
    Here are the details on our code lookup challenge. Additional suggestions would be greatly appreciated. I’ll try to post more detail on the legacy table conversion challenge later.
    -- The main record table:
    create table RECORDS (
      SSN varchar2(20),
      XMLREC sys.xmltype
    ) xmltype column XMLREC store as binary xml;
    create index records_ssn on records(ssn);
    -- A dozen code tables represented by one like this:
    create table CODES (
      CODE varchar2(4),
      DESCRIPTION varchar2(500)
    );
    create index codes_code on codes(code);
    -- Some XML records with coded values (the real records are much more complex of course):
    -- I think this took about a minute or two
    DECLARE
    ssn varchar2(20);
    xmlrec xmltype;
    i integer;
    BEGIN
    xmlrec := xmltype('<?xml version="1.0"?>
    <Root>
    <Id>123456789</Id>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    <Element>
    <Subelement1><Code>11</Code></Subelement1>
    <Subelement2><Code>21</Code></Subelement2>
    <Subelement3><Code>31</Code></Subelement3>
    </Element>
    </Root>');
    for i IN 1..100000 loop
    insert into records(ssn, xmlrec) values (i, xmlrec);
    end loop;
    commit;
    END;
    -- Some code data like this (ignoring date ranges on codes):
    DECLARE
    description varchar2(100);
    i integer;
    BEGIN
    description := 'This is the code description ';
    for i IN 1..3000 loop
    insert into codes(code, description) values (to_char(i), description);
    end loop;
    commit;
    end;
    -- Retrieve one record while performing code lookups. Takes about 5-6 seconds...pretty slow.
    -- Each additional lookup (times 3 repeating elements in the data) adds about 1 second.
    -- A typical real record has 5 Elements and 20 Subelements, meaning more than 20 seconds to display the record
    -- Note we are accessing a single XML record based on SSN
    -- Note also we are reusing the one test code table multiple times for convenience of this test
    select xmlquery('
      for $r in Root
      return
        <Root>
          <Id>123456789</Id>
          {for $e in $r/Element
           return
             <Element>
               <Subelement1>
                 {$e/Subelement1/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement1/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement1>
               <Subelement2>
                 {$e/Subelement2/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement2/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement2>
               <Subelement3>
                 {$e/Subelement3/Code}
                 <Description>
                   {ora:view("disaac","codes")/ROW[CODE=$e/Subelement3/Code]/DESCRIPTION/text()}
                 </Description>
               </Subelement3>
             </Element>}
        </Root>
    ' passing xmlrec returning content)
    from records
    where ssn = '10000';
    The plan shows the nested loop access that slows things down.
    By contrast, a functionally-similar SQL query on relational data will use a hash join and perform 10x to 100x faster, even for a single record. There seems to be no way for the optimizer to see the regularity in the XML structure and perform a corresponding optimization in joining the code tables. Not sure if registering a schema would help. Using structured storage probably would. But should that be necessary given we’re working with a single record?
    Operation Object
    |SELECT STATEMENT ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | NESTED LOOPS (SEMI)
    | TABLE ACCESS (FULL) CODES
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | SORT (AGGREGATE)
    | XPATH EVALUATION ()
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    With an xmlindex, the same query above runs in about 1 second, so is about 5x faster (0.2 sec/lookup), which is almost good enough. Is this the answer? Or is there a better way? I’m not sure why the optimizer wants to scan the code tables and index into the (one) XML record, rather than the other way around, but maybe that makes sense if the optimizer wants to use the same general plan as when the WHERE clause constraint is relaxed to multiple records.
    -- Add an xmlindex. Takes about 2.5 minutes
    create index records_record_xml ON records(xmlrec)
    indextype IS xdb.xmlindex;
    Operation Object
    |SELECT STATEMENT ()
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (GROUP BY)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | TABLE ACCESS (FULL) CODES
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | FILTER ()
    | NESTED LOOPS ()
    | FAST DUAL ()
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | SORT (AGGREGATE)
    | TABLE ACCESS (BY INDEX ROWID) SYS113473_RECORDS_R_PATH_TABLE
    | INDEX (RANGE SCAN) SYS113473_RECORDS_R_PATHID_IX
    | TABLE ACCESS (BY INDEX ROWID) RECORDS
    | INDEX (RANGE SCAN) RECORDS_SSN
    Am I on the right path, or am I totally using the wrong approach? I thought about using XSLT but was unsure how to reference the code tables.
    I’ve done the best I can constraining the main record to a single row passed to the XMLQUERY. Given Mark’s post (thanks!) should I be joining and constraining the code tables in the SQL WHERE clause too? That’s going to make the query much more complicated, but right now we’re more concerned about performance than complexity.
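    For what it's worth, the kind of relational join I have in mind would look something like the sketch below (illustration only: it resolves just one subelement's codes and returns flat rows rather than the rebuilt XML document):
    -- Hypothetical sketch: shred the coded values with XMLTABLE and join CODES relationally,
    -- so the optimizer can treat the lookup as an ordinary join.
    SELECT r.ssn, x.code, c.description
    FROM   records r,
           XMLTABLE('/Root/Element/Subelement1'
                    PASSING r.xmlrec
                    COLUMNS code VARCHAR2(4) PATH 'Code') x,
           codes c
    WHERE  r.ssn  = '10000'
    AND    c.code = x.code;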

  • Performance issue with snapmirror and snapshots on target aggregate

    Hi all, does anyone of you have experience with SnapMirror and larger amounts of data? At the moment we do a SnapMirror of about 100 TB of data, distributed over about 10 volumes, to a SATA aggregate on a second filer with 85 4 TB SATA disks (5x 17-disk RAID groups). Source is a FAS8040, target is a FAS8020, both with cDOT 8.3P1. We already moved all workload off the target aggregate, so it hosts only SnapMirror targets. On the source side we take 1 snapshot per day and keep 14 snapshots. SnapMirror runs once per day. From counting snapshots I would say the daily change rate is 2-2.5 TB across all volumes. SnapMirror is working fine and finishes in less than 2-3 h, but container block reclamation and deswizzling are totally killing the aggregate on the target side. We see a continuous load of 30 MB reads, and disk utilization for all disks except parity disks is 90-100%. At first we planned 4 h snapshots, but that is just not possible. At the moment we have disabled deswizzling and get to a point where, if we are lucky, the target aggregate load drops during the night just before the next SnapMirror kicks in. We are quite new to NetApp, but it sounds ridiculous that you need so much I/O for just a plain replication and some snapshots. Do you have any experience with snapshots and SnapMirror using SATA disks? I think snapshots and SnapMirror on NetApp are very resource demanding. It is true that the creation of snapshots on NetApp is super efficient and instant, but as soon as a snapshot has to be deleted, container block reclamation kicks in and takes a large amount of disk resources. The same goes for SnapMirror: it is really cool and stable, but deswizzling for the logical-to-physical block mapping with large data volumes affects SnapMirror target performance heavily. Best wishes, Stefan

    Hi RPHELANIN, the schedule is 24 h. Yes, from time to time deswizzling does not finish, but 24 h is our maximum; we had planned for 4 h. But I think this is just impossible with NL-SAS disks unless you do not change data on the source :-). We cross-checked deswizzling and container block reclamation by disabling each of them; most of the load is produced by container block reclamation. The positive impact of Flash Cache is lower on deswizzling than on container block reclamation. I think most of NetApp's internal workload is sized for 10k and 15k drives. If we compare I/O per GB of a 10k 900 GB drive with a 7k 4 TB drive, we have a ratio of nearly 10:1. Mechanisms like reclaiming blocks or mapping virtual to physical blocks seem to produce too much load for NL-SAS drives. On the other hand, deduplication and compression work fine and produce acceptable disk load. Nevertheless we disabled them because they produce too many changed blocks for SnapMirror. Best wishes, Stefan

  • Training and Event Management search functionality, linkage with ESS and MSS

    Hi ,
    I am implementing TEM (Training and Event Management), and as the frontend application for TEM we are using the HCM_LEARING BSP application, but the search functionality is not working. I checked with the EP guy; he told me that TREX is not installed.
    Can you please tell me: is TREX required for the TEM search functionality?
    We also tried checking through SE80, selected the Search page and tested the search functionality; it is not working there either.
    Also, how do we integrate TEM with MSS? The client requires standard functionality, and I am not getting any data for TEM in MSS.
    Kindly help me.
    Thanks and Best Regards
    Puneet

    Hi,
    TREX is a search engine which is required for candidate searching.
    Contact your Basis people; they will help you out in installing it.
    It works with RFCs; you had better touch base with an ABAP/WDP consultant as well.
    cheers
    rafi

  • Release management with Azure and Visual Studio Online (Cloud TFS)

    What strategy would you use to manage the releasing of versioned software to Azure cloud services (web and worker roles)? We are not looking for continuous integration. We are using Visual Studio 2013 and Visual Studio Online (Cloud TFS).
    At one point, we were releasing straight from Visual Studio using the Azure Cloud Project Publish tool. This is really bad practice in my opinion, as you can never be sure what you are really releasing. Additionally, there is no automated control over the labeling or branching of code, or the running of unit tests and code analysis checks.
    Next, we employed Release builds on Visual Studio Online. Before deployment, one would edit the appropriate Build Definition (whether it be for Test or Production) by filling in the code label (under the "Get Version" build property) that is to be released. This would then get the appropriate code (by the label specified), build it, and release it to whatever cloud service is specified in the targeted Cloud Project profile (this uses the AzureContinuousDeployment.11.xaml template).
    There is still a degree of manual intervention involved. Also, the fact that a version of code is built every time before it is released is not ideal (as far as I understand, it would be better if it were packaged once).
    The Microsoft Release Management tools look ideal for the job, but are not supported with Visual Studio Online.
    Is there a better way of handling our releases?

    /waves hand.. These are not the tools you seek. You are looking for continuous integration.
    Although CI has the word continuous in there, it does not mean "all the time, every checkin". It can easily refer only to those bits you want to release, and the way to tell the system which bits you want released is to merge them to a Releases branch.
    If you do this, not only do you get all the joy of controlled CI, but you guarantee that what you release is exactly what is controlled in your SCM, under the Releases branch, preferably tagged or otherwise noted as a particular release. That means you can also roll back to a previous release by simply reverting to a previous release in your SCM!
    Of course you don't have to let it happen automatically; you can set it up to build 'continually' and then remove the check on the SCM to see if any changes have been committed. You can replace this with the manual build button.

  • Best file manager with SMB and streaming support? Time Capsule access?

    It seems I constantly have to jump through hoops because there is no file manager in iOS. Another simple scenario: I have 2 disks connected to my Apple Time Capsule. I can access them with Android phones, but I always have trouble finding a good app on the iPhone. Any tips? I wish I could also stream my home videos, etc., from these disks. Documents 5 (Readdle) can access them via SMB, but not stream, which is nasty for large videos.
    Thank you,
    Roman

    I was experiencing the same problem when I upgraded to a Time Capsule. To fix the problem, remove all personal file sharing port mapping settings. Then make sure that "enable file share" and "enable over WAN" are on and checked for the TC, then update the TC. While that's updating, go into the Server app and remove any old file sharing protocols for the TC. Next, add a new custom protocol by hitting the +, name it "file share" and set the port to 140. Hit the update button and wait for it to finish. Once that's finished, go back to AirPort Utility and find the newly made file share protocol by selecting it under port mapping for the TC. If you don't have the newly made one in there, make a new one yourself. Name it the same as you did in the Server app, set the public TCP ports to 140 and the private TCP ports to 548, and make sure the private IP address is set to the same as your server's in the reservation. Update the Time Capsule and you should be able to access both by putting in the appropriate afp://www.yourserver.com:140 for the server and afp://www.yourserver.com:548 for the TC. Have fun, Lion Server can be a little fussy!

  • Font management with InDesign and Windows 7

    Hello
    I am presently looking to upgrade to Windows 7 and use InDesign CS4. I have had a play with installing PostScript Type 1 fonts using the Windows font manager and it seems absolutely average, but it worked (I think). I was wondering if you had any suggestions on what would be best practice in this setup.
    Thanks for your time

    Thanks for that. I spoke to the team using InDesign and they would love to move to Extensis Suitcase, but they had experienced problems with MathMagic Pro. I am going to test this out and hopefully resolve it, but I wondered if you had any knowledge of compatibility issues?
    Thanks again

  • Memory Management with NSString and synthesized properties

    I thought I understood memory management but now I'm getting some odd behavior and that's the only thing I can see that I might be doing incorrectly:
    I have a synthesized NSString property called displayText. At one point I attempt to set a label with the displayText property (label.text = [note displayText]). The first one works, but the second one does not:
    [note setDisplayText:[note fileName]]; // - works
    [note setDisplayText:[self getCharacterInFileName:[note fileName]]]; // - does not work
    And here is getCharacterInFileName:
    // Given the fileName for the image, get the specific part of that fileName for the note letter
    - (NSString *)getCharacterInFileName:(NSString *)fileName
    {
        NSRange range = {1,1};
        NSString *characterInString = [[NSString alloc] initWithString:[fileName substringWithRange:range]];
        return characterInString;
    }
    Is there some pointer mismanagement going on here that I'm missing?
    Message was edited by: darkpegasus

    Nevermind, I'm an idiot. I forgot to close an if/else block with a brace and it screwed everything up.
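    As a side note on the memory-management angle (separate from the missing brace that turned out to be the real cause): under manual reference counting, the string created with alloc/init in getCharacterInFileName is never released, so it leaks. A minimal sketch of a leak-free variant would simply return the substring, which is already autoreleased:
    // Sketch only: substringWithRange: returns an autoreleased object,
    // so the caller does not inherit an extra reference to release.
    - (NSString *)getCharacterInFileName:(NSString *)fileName {
        NSRange range = {1, 1};
        return [fileName substringWithRange:range];
    }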

  • Performance degradation with COGNOS and BW

    Hello,
    Do you know how to increase performance when using Cognos to run queries against BW? Cognos seems to need a lot of RAM.
    Thanks for your help
    Catherine Bellec

    In your original compile you don't use any optimisation flags, which tells the compiler to do minimal optimisation - you're basically telling the compiler that you are not interested in performance. Adding -g to this requests that you want maximal debug. So the compiler does even less optimisation, in order that the generated code more closely resembles the original source.
    If you are interested in debug, then -g with no optimisation flags gives you the most debuggable code.
    If you are interested in optimised code with debug, then try -O -g (or some other level of optimisation). The code will still be debuggable - you'll be able to map disassembly to lines of source, but some things may not be accessible.
    If you are using C++, then -g will in SS12 switch off front-end inlining, so again you'll get some performance hit. So use -g0 to get inlining and debug.
    HTH,
    Darryl.
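    For reference, the flag combinations described above would look something like this on the command line (the Sun Studio C++ compiler driver CC and the file names are illustrative assumptions):
    CC -g    app.cc -o app    # full debug, minimal optimisation
    CC -O -g app.cc -o app    # optimised code that is still debuggable
    CC -g0   app.cc -o app    # debug information without switching off front-end inlining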

  • Vlan management with WLC and WCS

    I'd like to know if it is possible to use the same VLAN for the management of the WCS and for configuring a WLAN.
    I tried to build this lab, and when I declare a dynamic interface that is in the same subnet as the WCS IP address, the reachability between the controller and the WCS is lost.

    I know that I should not put servers on the same VLAN as wireless clients, but I just want to know whether it is possible, or whether Cisco implemented something to prevent it, so I can understand why my lab didn't work with this configuration.
    Thanks
