Direct mapping of Siebel OM context to Oracle session attributes

Hello all,
I wonder if it is possible to map the Siebel OM context directly to Oracle session attributes. Does anyone have experience with this? If yes, in which version of Siebel? And if this is applicable, what happens when connection pooling is used? Is there any documentation?
For example, it would be very useful if we could have information about Siebel tasks in the fields of V$SESSION (see the sketch after this post):
Module = “BUSCOMP NAME”
Action = “VIEWNAME”
Client Info = “TASKID”, “SERVERNAME”, “USERID”, “COMPONENT NAME”
Thank you,
Peter
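
For reference, the standard way to populate those three V$SESSION columns from a session is the DBMS_APPLICATION_INFO package. A minimal sketch follows; the Siebel-side values are placeholders, and whether the Siebel OM can be configured to issue these calls per task is exactly the open question above.

-- Hedged sketch: how V$SESSION.MODULE, ACTION and CLIENT_INFO are normally set.
-- All literal values below are illustrative placeholders.
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'Account',                  -- e.g. buscomp name (max 48 bytes)
    action_name => 'Account List View');       -- e.g. view name    (max 32 bytes)
  DBMS_APPLICATION_INFO.SET_CLIENT_INFO(
    'TASKID=4711 SRVR=SBLSRV1 USER=PETER');    -- free text, max 64 bytes
END;
/
-- The attributes are then visible to monitoring sessions:
SELECT sid, module, action, client_info FROM v$session WHERE username = USER;

With connection pooling the calls would have to be re-issued each time a pooled connection is handed to a different Siebel task, since the attributes belong to the database session, not to the task.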

Hi,
Our Siebel profiling Tools do exactly what you want. They work on versions 7.7, 7.8, 8.0 and 8.1.
Also, in Siebel 8.1 and above there is now a bind variable :1 that provides this context information.
R
Robert Ponder
Lead Architect and Director
Ponder Pro Serve
cell: 770.490.2767
fax: 770.412.8259
email: [email protected]
web: www.ponderproserve.com

Similar Messages

  • Context in Oracle 8i

    I have seen (some time ago) that it is possible to use Context in Oracle 8i instead of interMedia.
    Is it possible, and where can I read about it, and where can I get Context?
    thanks
    AOL

    It's not possible.
    interMedia Text is an (abrupt) evolution of context, which involved tight integration with the RDBMS. It is much faster, and has a much smaller API, and migration is encouraged.

  • Autofs direct mapping not working

    Hi there,
    I'm currently migrating from Gentoo and hit a problem. I'm trying to use an autofs feature called 'Direct mapping'. It allows one to map one directory to a specific server. Combined with ghosting, one can use this to do an on-demand bind setup. This is useful if the target is not always available (like an encrypted disk that is not active).
    Relevant auto.master snippet:
    /etc/autofs/auto.bind_1 \
    -fstype=auto,--timeout=5 \
    defaults,nodev,noexec,nomand,nosuid,noiversion,\
    noatime,bind
    and the full contents of auto.bind_1:
    /home/gedeeld/secure :/home/secure/gedeeld
    /opt/32bit_chroot/home/secure/ronald :/home/secure/ronald
    Automounter just won't list it:
    [root@Alpha autofs]# automount -f -v
    Starting automounter version 5.0.7, master map auto.master
    using kernel protocol version 5.02
    syntax error in nsswitch config near [ syntax error ]
    mounted indirect on /var/autofs/cifs with timeout 5, freq 2 seconds
    ghosting enabled
    mounted indirect on /var/autofs/dev with timeout 10, freq 3 seconds
    ghosting enabled
    This exact same configuration file used to work with autofs on Gentoo. I'm at a dead end as to why it's not working under Arch. Autofs documentation is sparse, especially with respect to 'Direct mapping'.

    http://bugs.archlinux.org/task/6936

  • Mapping IDOC to XML - Context problem

    Hello,
    I am trying to map up to 2 segments from an ORDERS IDOC into one segment in my PO XML. 
    In the target PO XML document both segments UltimateCustomer and ParticipatingDealer are under the heading SalesOrderData.
    <Header>
    ...<SalesOrderData>
    .......<UltimateCustomer>
    ............<Name>
    ............<address>
    ....................<street>
    ....................<city>
    ......<ParticipatingDealer>
    ............<Name>
    ............<address>
    ....................<street>
    ....................<city>
    So I only create the SalesOrderData segment if at least one of the two E1EDKA1 segments exists where PARVW = ZO or ZU; then I want to replicate those segments into their target fields.
    ZO - Ultimate Customer
    ZU - Participating Dealer
    When both values exist, I am only getting the data from the first segment (ZO or ZU) but not both, with the following mapping:
    If PARVW (IDOC Context) = ZO or PARVW (IDOC Context) = ZU ...CreateIf...SalesOrderData
    If PARVW = ZO...CreateIf...UltimateCustomer.
    If PARVW = ZU ...CreateIf...ParticipatingDealer.
    I tried many context combinations to get both segments to appear, but the only time I could get them both to appear is if I force the creation of SalesOrderData (Constant[]...SalesOrderData). This of course creates a blank node when ZO and ZU do not exist, which is not a good solution.
    Any suggestions?
    Thanks,
    Matt

    Check this:
    Mapping of "Header":
    PARVW(context IDOC)----
    --------------------equals ----
    Constant(ZO)
    -------------------------------------OR-----IfWithoutElse(Constant()----then)--------collapseContext------HEADER
    PARVW(context IDOC)----
    --------------------equals ----
    Constant(ZU)
    Mapping of "UltimateCustomer":
    PARVW----
    --------------------equals ----IfWithoutElse(Constant()----then)-----removecontext---------UltimateCustomer
    Constant(ZO)
    Mapping of "ParticipatingDealer":
    PARVW----
    --------------------equals ----IfWithoutElse(Constant()----then)-----removecontext---------ParticipatingDealer
    Constant(ZU)
    Mapping of "name" under "UltimateCustomer":
    PARVW----
    ---------------equals ----IfWithoutElse(Name(from source)---then)--removecontext----splitbyvalue(each value)--Name
    Constant(ZO)
    In the same way (like "Name" field mapping) proceed for the rest of the target fields....

  • Benefits of context in Oracle 11g

    I spent a lot of time doing R&D on the benefits of contexts but somehow was not able to really understand the benefit of contexts in Oracle 11g. Let me try to explain with an example: I have seen an example of a database session-based application context where an 'after logon on database' trigger was using a context to raise_application_error if the logged-in user was not in the employees table. I was wondering whether the same thing can be implemented without a context, raising raise_application_error if the logged-in user was not authorized.
    I hope I have been able to explain my question properly about the exact benefits of contexts.
    Please reply to my query.
    Regards

    I have seen an example of a database session-based application context where an 'after logon on database' trigger was using a context to raise_application_error if the logged-in user was not in the employees table.
    Great! Post a link to what you 'saw' so we can see it too and get the FULL CONTEXT of what it was doing and how it was doing it.
    I was wondering whether the same thing can be implemented without a context, raising raise_application_error if the logged-in user was not authorized.
    We have no way of knowing until we know WHAT the code you 'have seen' was doing and HOW it was doing it.
    Maybe the same thing can be done without using contexts. But most examples are just that: examples. They try to show you the MINIMUM that must be done to illustrate what the functionality can do.
    They often do NOT illustrate or discuss in detail WHEN to use contexts or WHAT problems contexts are most suited for.
    First - we need to know WHAT contexts are. For that the Oracle docs can be used:
    http://docs.oracle.com/cd/B28359_01/network.111/b28531/app_context.htm
    What Is an Application Context?
    An application context is a set of name-value pairs that Oracle Database stores in memory. The application context has a label called a namespace, for example, empno_ctx for an application context that retrieves employee IDs. Inside the context are the name-value pairs (an associative array): the name points to a location in memory that holds the value. An application can use the application context to access session information about a user, such as the user ID or other user-specific information, or a client ID, and then securely pass this data to the database. You can then use this information to either permit or prevent the user from accessing data through the application. You can use application contexts to authenticate both database and nondatabase users.
    Note that the ONLY possible uses mentioned are authentication related - so more info is needed than just that intro.
    Think of an application context as a global variable that holds information that is accessed during a database session
    More info - global variables - available ANYTIME during a session. Note in particular it says NOTHING about any packages/functions/procedures being needed. Hmm - maybe we can use those 'global variables' in SQL queries?
    But you ask about benefits - the doc has a section for that:
    Benefits of Using Application Contexts
    Most applications contain the kind of information that can be used for application contexts. For example, in an order entry application that uses a table containing the columns ORDER_NUMBER and CUSTOMER_NUMBER, you can use the values in these columns as security attributes to restrict access by a customer to his or her own orders, based on the ID of that customer.
    Application contexts are useful for the following purposes:
      Enforcing fine-grained access control, for example, in Oracle Virtual Private Database policies
      Preserving user identity across multitier environments
      Enforcing stronger security for your applications, because the application context is controlled by a trusted procedure, not the user
      Increasing performance by serving as a secure data cache for attributes needed by an application for fine-grained auditing or for use in PL/SQL conditional statements or loops
    This cache saves the repeated overhead of querying the database each time these attributes are needed. Because the application context stores session data in cache rather than forcing your applications to retrieve this data repeatedly from a table, it greatly improves the performance of your applications.
      Serving as a holding area for name-value pairs that an application can define, modify, and access
    Those are some of the benefits you asked about.
      Global application contexts. This type retrieves data that is stored in the System Global Area (SGA) so that it can be used for applications that use a sessionless model, such as middle-tier applications in a three-tiered architecture. A global application context is useful if the session context must be shared across sessions, for example, through connection pool implementations.
    "Using Global Application Contexts" describes this type.
    Interesting! This means you can truly have GLOBAL variables that are accessible by ALL USERS ACROSS SESSIONS.
    How would you do that using other functionality?
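    One hedged sketch (all names here - pool_ctx, pool_ctx_pkg, app_user - are made up for illustration) of a globally accessed context that carries a per-client attribute across pooled sessions:
    CREATE OR REPLACE CONTEXT pool_ctx USING pool_ctx_pkg ACCESSED GLOBALLY;
    CREATE OR REPLACE PACKAGE pool_ctx_pkg IS
      PROCEDURE set_app_user(p_user VARCHAR2, p_client_id VARCHAR2);
    END;
    /
    CREATE OR REPLACE PACKAGE BODY pool_ctx_pkg IS
      PROCEDURE set_app_user(p_user VARCHAR2, p_client_id VARCHAR2) IS
      BEGIN
        -- Keyed by client identifier rather than by session, so the value follows
        -- the logical client, not whichever pooled physical session serves it.
        DBMS_SESSION.SET_CONTEXT('pool_ctx', 'app_user', p_user,
                                 username => USER, client_id => p_client_id);
      END;
    END;
    /
    -- A pooled session announces which client it is serving, then reads the value:
    EXEC DBMS_SESSION.SET_IDENTIFIER('task-4711')
    SELECT SYS_CONTEXT('pool_ctx', 'app_user') FROM dual;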
    The doc goes on to give other examples of 'benefits':
    This section contains:
      About the Package That Manages the Database Session-Based Application Context
      Using SYS_CONTEXT to Retrieve Session Information
      Using Dynamic SQL with SYS_CONTEXT
      Using SYS_CONTEXT in a Parallel Query
      Using SYS_CONTEXT with Database Links
      Using DBMS_SESSION.SET_CONTEXT to Set Session Information
    What about that SYS_CONTEXT function? That is in the Sql Language Doc
    http://docs.oracle.com/cd/E11882_01/server.112/e41084/functions184.htm
    Purpose
    SYS_CONTEXT returns the value of parameter associated with the context namespace at the current instant. You can use this function in both SQL and PL/SQL statements. SYS_CONTEXT must be executed locally.
    For namespace and parameter, you can specify either a string or an expression that resolves to a string designating a namespace or an attribute. If you specify literal arguments for namespace and parameter, and you are using SYS_CONTEXT explicitly in a SQL statement—rather than in a PL/SQL function that in turn is mentioned in a SQL statement—then Oracle Database evaluates SYS_CONTEXT only once per SQL statement execution for each call site that invokes the SYS_CONTEXT function.
    More benefits - contexts and the function can be used in 'both SQL and PL/SQL statements'. And because a SQL statement in PL/SQL that uses SYS_CONTEXT can be parsed at compile time, contexts are essentially treated as BIND variables and can dramatically improve the performance of both standard SQL statements and DYNAMIC SQL statements in PL/SQL code.
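    A compact sketch of the session-based flavour discussed above (the empno_ctx / empno_ctx_pkg names come from the quoted doc; the employees, orders and sales_rep_id objects are illustrative assumptions):
    CREATE OR REPLACE CONTEXT empno_ctx USING empno_ctx_pkg;
    CREATE OR REPLACE PACKAGE empno_ctx_pkg IS
      PROCEDURE set_empno;
    END;
    /
    CREATE OR REPLACE PACKAGE BODY empno_ctx_pkg IS
      PROCEDURE set_empno IS
        v_empno NUMBER;
      BEGIN
        SELECT employee_id INTO v_empno
          FROM employees
         WHERE UPPER(email) = SYS_CONTEXT('USERENV', 'SESSION_USER');
        -- Only this trusted package is allowed to write into the empno_ctx namespace.
        DBMS_SESSION.SET_CONTEXT('empno_ctx', 'employee_id', v_empno);
      EXCEPTION
        WHEN NO_DATA_FOUND THEN
          NULL;  -- user not in EMPLOYEES: leave the attribute unset
      END;
    END;
    /
    -- Typically called from an AFTER LOGON ON DATABASE trigger; afterwards the value
    -- behaves like a bind variable inside ordinary SQL:
    SELECT * FROM orders
     WHERE sales_rep_id = SYS_CONTEXT('empno_ctx', 'employee_id');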

  • EclipseLink Direct Map with Joining mapping issue

    Hi EclipseLink Team,
    We encountered an issue when mapping an attribute as a Direct Map with Join Fetch enabled in EclipseLink Workbench 1.1.1 (Build v20090430-r4097).
    Basically, we have the following data model:
    SESSION
    SESS_NO (PK),
    PARAM_SESS
    SESS_NO (FK) (PK),
    PARAM_NAME (PK),
    PARAM_VALUE
    Then we have Session.java persistent entity associated with SESSION table. This class contains Map attribute sessionParameters which we map as Direct Map with PARAM_SESS.PARAM_NAME as key and PARAM_SESS.PARAM_VALUE as value (referenced by PARAM_SESS.SESS_NO = SESSION.SESS_NO).
    The described mapping works fine without Joining enabled (both for Lazy and Eager Loading).
    But we have cases when we want to get parameters for a number of sessions, and disabling Joining leads to a number of SELECT SQL queries being performed (both for Lazy and Eager Loading), which is bad for performance.
    So we have chosen the Eager Loading and set the Join Fetch option to Outer. Then we have got the following:
    I see in the log the only SQL query performed: SELECT DISTINCT t0.SESS_NO, …, t1.PARAM_VALUE, t1.PARAM_NAME FROM {oj SESSION t0 LEFT OUTER JOIN PARAM_SESS t1 ON (t1.SESS_NO = t0.SESS_NO)}. This is pretty good for us. This query returns exactly what we expect when executing on the database. But the Map attribute for every session is populated incorrectly: Maps are empty (not corresponding to available relational data).
    Could you please let us know if this is a bug, or kind of known issue or we made something wrong? Some hints and proposals would be very helpful and appreciated.
    I should mention that for now we want to map all of this for read only purpose.
    Thanks.
    Best Regards,
    Alexey Elisawetski

    James,
    I've tried both 1.2 release and 2.0 (v20091121-r5847) but received the same result - empty Map.
    Moreover, for both versions the following string was absent in deployed XML file:
    +<direct-key-field table="PARAM_SESS" name="PARAM_NAME" xsi:type="column"/>+
    Therefore, on application initialization I got an exception: org.eclipse.persistence.exceptions.DescriptorException with the message "This descriptor contains a mapping with a DirectMapMapping and no key field set."
    So I was forced to add the line manually.
    This seems buggy to me...
    Regards,
    Alexey

  • "System running low on direct mapped memory " Error on Console

    Hi,
    Our ACE primary device hung; it did not respond to management or console access. After rebooting the device everything came back to normal, but we are getting the following error on the console.
    901001 <2>Dec 11 15:42:20 kernel: System running low on direct mapped memory    
    901001 <2>Dec 11 15:42:20 kernel: Please issue 'show processes memory' and 'show system kmem' to diagnose further
    901001 <2>Dec 11 15:42:21 kernel: Available CP memory reached below 5 percent threshold, TotalMemFree: 312201216 bytes, High MemFree: 305004544 bytes
    Kindly suggest why this error message is being generated and whether it will affect the box performance.
    Software version: A4(2.0)

    Hi Ranjith,
    It does seem to be the DDTS you are hitting. Please discuss with your account team and upgrade for the fix.
    You may also be interested in the related DDTSs: both of the DDTSs below are related. Reloading the device fixes the issue, but only for a while. You may have to reload when you see this message again and available memory is below 15%.
    CSCtx41399
    CSCtz29117
    Regards,
    Kanwal

  • Which object-relational mapping tool is the best for Oracle Coherence?

    Which object-relational mapping tool is the best for Oracle Coherence?
    My application is read-and-write-intensive. Which tool is most suitable for this application?
    TopLink Essentials, TopLink, EclipseLink or Hibernate?
    Thank you

    I would pick Hibernate mainly because of its popularity and wide knowledge base.
    Coherence has provided some documentation for the integration.
    http://download.oracle.com/docs/cd/E14526_01/coh.350/e14537/usehibernateascoh.htm#CEGFEFJH
    If you have the schema in database, myEclipse can provide you hibernate bindings by reverse engineering.

  • Direct mapping in updates rules not functioning as expected

    Hi experts,
    I have a character string with only digits (NUMC) which is mapped in the update rules of an ODS. It is a direct mapping from another ODS. In spite of that, we are having scenarios where the source ODS is filled and the field in the destination ODS comes up empty. The source ODS is looking up that value from another ODS.
    I am wondering why this is so. I am in a 3.5 environment.

    Is it just this field coming up as blank, or other fields too?
    Do you have any start routine or any kind of filtering in place in the update rules?
    Just try to simulate an update and see if you can find anything.
    /people/pavan.bayyapu/blog/2006/09/06/debugging-data-upload-in-bw
    Also make sure you "No Update" is not selected in update rules for this field.
    Edited by: sam hennry on Mar 28, 2008 1:15 PM
    Edited by: sam hennry on Mar 28, 2008 1:17 PM

  • Performance tuning for siebel CRM application on oracle database

    Hi,
    Please send me the link for performance tuning of a Siebel CRM application on an Oracle database. If there are any white papers, please send me the links.
    Thanks,
    Rajesh

    Hi,
    This Metalink document is very useful; if you have any other documents or links, please let me know.
    Thanks once again
    Rajesh

  • Two-directional Mapping

    I need to make use of a data structure for mapping an integer to an instance of Object and vice versa. A Hashtable would be an ideal solution, but the first time I need to map the integer to the Object, and the second time I have to map the Object back to the integer, so we can call it two-directional mapping. We could use a Hashtable, but it would be one-directional mapping, right? Also it might not be very efficient if the table is large...
    Data-structure that I need should provide me functionality something like this:
    Object obj1 = data-structure.get(intValue1);
    Object obj2 = processObj(obj1);
    Integer intValue2 = data-Structure.get(obj2);
    I would like to know if there is any data structure that Java provides for this two-directional mapping. If not, then what is the best way to go about it?
    Thanks.

    Please don't cross-post.
    http://forum.java.sun.com/thread.jspa?threadID=627170

  • Direct mapping in interface

    hi experts,
    Could you please tell me what direct mapping in interfaces is?
    regards,
    siri.

    hi
    good
    go through these links
    http://help.sap.com/saphelp_nw04/helpdata/en/9b/c1993c54966f3ae10000000a114084/content.htm
    thanks
    mrutyun

  • No. of Oracle sessions

    Hi All,
    We are running Oracle 8i on a Sun SPARC system with Solaris 2.6. The app uses OCI.
    There are about 500 users who run a single executable, say ABC, whose size is 14M. We have about 10G of RAM. Now if all these users log in and start the executable ABC, it will crash our system immediately.
    Can anybody help me make this better on memory, or suggest any other way we can improve it?
    Thanks and regards,
    Kshitij

    1. There is a multithreaded application ABC which is being started by all the users, say 500. As soon as the executable is started it connects to Oracle and sets up Oracle sessions - two of them, one for OCI-related activity and the other for Pro*C-related activity. Now you may be asking why we are using both; well, it's a long story and that code has not been touched for about 6 years now, so we have to live with that. Here is what we have: when we do "top" it says we are left with, say, 12M of space out of 8G (RAM), and it still keeps on running as we only have 60 users logged in. So that's why we have not had these problems yet, but I am expecting them.
    Currently we have the swap area at 2G only. How much swap memory do I need for this? We have a constraint on this (bad for us).
    So the first thing you need to do is to make the clients connect to the app instead of starting the app. In this way, your app can handle more than one client; and assuming you have reentrant code, the code can execute on behalf of any user.
    2. Can you be more elaborate on the OCI multiplexing sessions keeping in this the above point? That will help me a lot.
    In OCI, sessions and connections are distinct objects. Connections map to a data pipe on which calls are routed to the server, i.e. in another sense a connection maps to a unit of scheduling on the server for a call. Sessions map to the user context in which a call is executed. For example, the same select could return different results depending upon the user and its privileges. Sessions can be moved between connections. This is called multiplexing or session pooling. This allows you to have state retained for each individual user (up to 500 in your case) while mapping them onto one or more connections. The number of connections you would need is at most the number of concurrent users.
    Pro*C and OCI can share the same connection and have different sessions. Creating a single multithreaded reentrant app to service multiple users allows you to use fewer resources on your machine.

  • Total CPU time consumed by all Oracle sessions combined.

    Hello, I need to summarise the total CPU time consumed by all active Oracle sessions, and below is the query that I could come up with.
    However, the problem: even if I execute the query within milliseconds, the value of total CPU time is always changing for a given session. Hence, is the query below right?
    SELECT SUM( c.value)
    FROM V$SESSION a,
    v$sesstat c,
    v$statname d
    WHERE a.sid = c.sid
    and c.statistic# = d.statistic#
    AND d.name = 'CPU used by this session'
    AND a.status = 'ACTIVE';
    Thanks,
    R

    Your findings are correct - the statistic is changing all the time. I don't see what the problem is here; you just have to add a time stamp and then you can measure the difference between two samples.
    If you are on the 10g or 11g version then I would suggest you look at V$SESS_TIME_MODEL, which also gives interesting aggregated data at the session level; for the database level, V$SYS_TIME_MODEL is probably the thing you are looking for (a per-session sketch is appended at the end of this reply).
    SQL> select stat_name,value from v$sys_time_model where stat_name='DB CPU';
    STAT_NAME                                                             VALUE
    DB CPU                                                           2125109375
    But I would like to warn you that there are some anomalies with time measurement in the database. Consider the following case:
    SQL> create or replace function f_cpu( p number) return number is
      2  m_date date;
      3  begin
      4    for a in 1..p loop
      5       m_date := sysdate;
      6    end loop;
      7    return 0;
      8  end;
      9  /
    Function created.
    SQL> alter session set events '10046 trace name context forever, level 8';
    Session altered.
    SQL> set arraysize 1
    SQL> select f_cpu(300000) from dual connect by level <= 10;
    F_CPU(300000)
                0
                0
                0
                0
                0
                0
                0
                0
                0
                0
    10 rows selected.
    SQL> alter session set events '10046 trace name context off';
    Session altered.
    From the other session you run your statement which just looks for the CPU usage in the first session. You will see that the CPU time is increased all the time, not just after every fetch call, but also between them.
    Now comes the funny thing - the SQL_TRACE shows the following ( I have removed all wait events to get more clear picture for my purpose):
    PARSING IN CURSOR #4 len=53 dep=0 uid=88 oct=3 lid=88 tim=347421772231 hv=2194474452 ad='230f2310' sqlid='2m7v6ua1cu1fn'
    select f_cpu(300000) from dual connect by level <= 10
    END OF STMT
    PARSE #4:c=0,e=124,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=347421772223
    EXEC #4:c=0,e=103,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=347421772446
    FETCH #4:c=671875,e=588199,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,tim=347422360764
    FETCH #4:c=1437500,e=1282871,p=0,cr=0,cu=0,mis=0,r=2,dep=0,og=1,tim=347423644892
    FETCH #4:c=1156250,e=1182013,p=0,cr=0,cu=0,mis=0,r=2,dep=0,og=1,tim=347424827769
    FETCH #4:c=1234375,e=1181276,p=0,cr=0,cu=0,mis=0,r=2,dep=0,og=1,tim=347426009910
    FETCH #4:c=1187500,e=1179318,p=0,cr=0,cu=0,mis=0,r=2,dep=0,og=1,tim=347427190068
    FETCH #4:c=593750,e=593217,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,tim=347427784188
    In this output "c=" is CPU time in 0.000001s (microseconds) and "e=" is elapsed time. There are many rows where CPU time > elapsed time, which is a well known problem of CPU time reporting. This was a single session, so this result is unrealistic: how can one use 1.43s of CPU time in only 1.28s of elapsed time?
    FETCH #4:c=1437500,e=1282871,p=0,cr=0,cu=0,mis=0,r=2,dep=0,og=1,tim=347423644892
    The CPU time is bigger than the elapsed time - and you can probably also notice that the CPU time is rounded to 00, 25, 50, 75.
    There are some interesting problems with time measuring in Oracle and one can easily be caught in this trap. I have written a post about this in my blog: http://joze-senegacnik.blogspot.com/2009/12/measurement-error-in-trace-file.html
    HTH, Joze
    Co-author of the forthcoming book "Expert Oracle Practices"
    http://www.apress.com/book/view/9781430226680
    Oracle related blog: http://joze-senegacnik.blogspot.com/
    Blog about flying: http://jsenegacnik.blogspot.com/
    Blog about Building Ovens, Baking and Cooking: http://senegacnik.blogspot.com
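    Picking up the V$SESS_TIME_MODEL suggestion above, a minimal sketch (10g and later) of the original aggregation rewritten against the time model statistic instead of 'CPU used by this session'; note that DB CPU is reported in microseconds:
    SELECT SUM(t.value) AS total_db_cpu_microsec
      FROM v$session s, v$sess_time_model t
     WHERE t.sid = s.sid
       AND t.stat_name = 'DB CPU'
       AND s.status = 'ACTIVE';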

  • Unit Testing and Oracle Sessions

    We have an issue regarding Oracle session and SQL Developer Unit Testing.
    Every time we run a Unit Test in SQLDeveloper a new Oracle session is created. When closing that Unit Test, the session, however, is not disconnected.
    And the only way to close that "Unit Test" session is to close SQLDeveloper.
    This is causing problems with the no. of sessions available to developers.
    Any help would be much appreciated.
    Subboss

    The focus of this forum is report design. The actual testing of reports would be subject to your own internal policies and procedures.
    Jason
