Exception Aggregation in Queries

Hi,
I want to achieve a balance-type key figure in my queries using a normal key figure. The idea was to use exception aggregation, which per the documentation is available under Calculated Key Figure Properties -> Enhance -> Exception Aggregation.
However, I only get "After aggregation" and "Before aggregation" in the Enhance option. Is there a special setting in the CKF to achieve exception aggregation with reference to a characteristic?
I did go through the white paper on balance sheet key figures using the cell editor, but that doesn't satisfy my requirement.
Any help would be appreciated.
Best Regards,
Shiva

Hi Björn,
My requirement is to calculate a balance amount in different currencies, for example in company code currency or local currency.
I will not be doing any calculation in the query; I only want it to act as a balance key figure. The values are already present in the cube in the respective currencies.
I do not want to create a new key figure in the system with exception aggregation, but rather use the existing key figure to act like a balance key figure in queries via exception aggregation.
Thanks once again,
Shiva

Similar Messages

  • Sybase ESP java sdk, SQL parser exception for all queries

    Hi,
    I am new to the Sybase ESP Java SDK. I am trying to use the SDK's projection subscription, but I am getting the following exception for all queries, including a blank string:
    com.sybase.esp.sdk.exception.ProjectErrorException: SQL parsing error
    at com.sybase.esp.sdk.impl.SubscriberImplV3.doSubscribe(Unknown Source)
    at com.sybase.esp.sdk.impl.SubscriberImpl.subscribeSql(Unknown Source)
    My example code:
    Credentials creds = new Credentials.Builder(Credentials.Type.USER_PASSWORD)
            .setUser("user").setPassword("pwd").create();
    Project project = s_sdk.getProject(uri, creds);
    project.connect(WAIT_TIME_60000_MS);
    Subscriber subscriber = project.createSubscriber();
    //subscriber.subscribeStream("Trades");
    subscriber.connect();
    subscriber.subscribeSql("select UserMaxCpu from wSBW912");
    I am new to the SDK and not sure what I am doing wrong here; please advise.
    Thanks,
    Venkatesh

    The problem you're experiencing is due to your JNDI configuration not matching the JavaDB database where you created the tables.
    If you look at the config for DerbyPool in the Admin Console, you'll see that DerbyPool points to:
    jdbc:derby://localhost:1527/sun-appserv-samples
    In the Admin Console click Resources->Connection Pools->DerbyPool, then click Additional Properties (9.1) or look at the page (9.0).
    You can either modify DerbyPool's config to point to:
    jdbc:derby://localhost:1527/BookDB
    Or you can create the tables in sun-appserv-samples. The create-tables ant task included with the Java EE 5 Tutorial will create the tables in the correct database.
    -ian

  • Exceptions using LINQ queries in a LightSwitch button _Execute method.

    I'm not sure if this is a LightSwitch or a ComponentOne FlexGrid issue, so I'm reporting it on both sites. The following _Execute method fails if I use the dispatcher with the LINQ query.
    private readonly List<Int32> DebitGPAccountRowindices = new List<int>();
    partial void SetAllDebitGPAccounts_Execute()
    {
        Dispatchers.Main.BeginInvoke(() =>
        {
            // Modify the column values for the collected row indices
            foreach (var t in DebitGPAccountRowindices)
            {
                var MyAcct = from g in this.DataWorkspace.SPDCData.GPAccounts
                             where (g.ConceptID == ConceptId
                                 && g.ApplicationId == 1
                                 && g.GLDepartment == Department
                                 && g.GLAccount == TargetGLAccount)
                             select g;
                var MyRowToUpdate = MyAcct.FirstOrDefault();
                if (MyRowToUpdate != null)
                    _flex[t, "DebitGLAccountId"] = GLAccountId;
            }
        });
    }
    If I use the dispatch I get a System.InvalidOperationException at the line -->var MyRowToUpdate = MyAcct.FirstOrDefault();
    {System.InvalidOperationException: It is not valid to execute the operation on the current thread.
       at Microsoft.LightSwitch.Threading.DispatcherExtensions.VerifyAccess(IDispatcher dispatcher)
       at Microsoft.LightSwitch.DataServiceQueryable.ScalarImpl[T](IDataServiceQueryable`1 source, Int32 takeCount, Func`2 scalarFunction)
       at Microsoft.LightSwitch.DataServiceQueryable.FirstOrDefault[TSource](IDataServiceQueryable`1 source)
       at LightSwitchApplication.DepartmentFeeFlexibleGrid.<SetAllDebitGPAccounts_Execute>b__1()
       at Microsoft.LightSwitch.Threading.ClientGenerated.Internal.MainDispatcher.<>c__DisplayClass2.<BeginInvoke>b__0()}
    If I remove the dispatch and just run it, I get the following error at the line --> _flex[t, "DebitGLAccountId"] = GLAccountId;
    {System.UnauthorizedAccessException: Invalid cross-thread access.
       at MS.Internal.XcpImports.CheckThread()
       at System.Windows.DependencyObject.GetValueInternal(DependencyProperty dp)
       at System.Windows.FrameworkElement.GetValueInternal(DependencyProperty dp)
       at System.Windows.DependencyObject.GetValue(DependencyProperty dp)
       at C1.Silverlight.FlexGrid.C1FlexGrid.get_ShowErrors()
       at C1.Silverlight.FlexGrid.Row.UpdateErrors()
       at C1.Silverlight.FlexGrid.Row.InvalidateCell(Column c)
       at C1.Silverlight.FlexGrid.Row.SetData(Column col, Object value)
       at C1.Silverlight.FlexGrid.Row.set_Item(Column col, Object value)
       at C1.Silverlight.FlexGrid.GridPanel.set_Item(Int32 row, Column col, Object value)
       at C1.Silverlight.FlexGrid.C1FlexGrid.set_Item(Int32 row, Column col, Object value)
       at C1.Silverlight.FlexGrid.C1FlexGrid.set_Item(Int32 row, String colName, Object value)
       at LightSwitchApplication.DepartmentFeeFlexibleGrid.SetAllDebitGPAccounts_Execute()
       at LightSwitchApplication.DepartmentFeeFlexibleGrid.DetailsClass.MethodSetProperties._SetAllDebitGPAccounts_InvokeMethod(DetailsClass d, ReadOnlyCollection`1 args)
       at Microsoft.LightSwitch.Details.Framework.Internal.BusinessMethodImplementation`2.<TryInvokeMethod>b__5()
       at Microsoft.LightSwitch.Utilities.Internal.UserCodeHelper.CallUserCode(Type sourceType, String methodName, String instance, String operation, ILoggingContext context, Action action, String additionalText, Func`1 getCompletedMessage, Boolean tryHandleException,
    Boolean swallowException, Exception& exception)}
    Does anyone have an idea what is happening here?
    Scott Mitchell

    Your database transaction should not span two buttons. The two buttons will be called on separate requests -- that is, separate calls to the server -- and you'll have a database connection dangling out there, potentially locking other people out of certain tables/rows until the user clicks the right button.
    Note also that there is a commit() method in the JDBC API.
    You might want to spend some time studying some example applications.
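    The advice above — do all the work and complete the transaction within a single request, using the JDBC commit()/rollback() calls — can be sketched as follows. This is an illustrative sketch in Python using the standard-library sqlite3 module as a runnable stand-in for JDBC; the accounts table and the two UPDATE statements are hypothetical, but the commit/rollback/close shape is the same one the JDBC Connection API gives you.

```python
import sqlite3

def transfer(db_path, amount):
    """Run both updates in one transaction within a single request:
    either both succeed (commit) or neither takes effect (rollback)."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = 1", (amount,))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = 2", (amount,))
        conn.commit()      # JDBC equivalent: conn.commit()
    except Exception:
        conn.rollback()    # JDBC equivalent: conn.rollback()
        raise
    finally:
        conn.close()       # never leave the connection dangling between requests
```

    The key point is the finally block: the connection is always closed before the request returns, so no half-open transaction is left locking rows until the user happens to click the right button.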

  • Function Sequence Error exception

    I get the following SQL exception when executing queries
    '[Microsoft][ODBC Driver Manager] Function sequence error'
    It doesn't always happen and it does not affect the results of the query.
    Could anyone offer any advice about this exception?
    Thank you,
    xer

    >> In this case the database is MySQL, so the problem is either the MySQL ODBC driver or the JdbcOdbcDriver, and not the database, because I don't know about you but I have never seen a MySQL database throw an error with a Microsoft error code and description.
    Yes and no.
    The MySQL ODBC driver throws that error, so it does have something to do with MySQL. But as you say, it is also specific to ODBC.
    >> Thus we are able to eliminate the database as the source of this particular problem in this case. Therefore, if we go directly from Java to the database using a type 4 driver, we are eliminating the pieces of software that are in fact causing the problem.
    Not necessarily. The error represents a problem. Presumably it doesn't happen all the time, so something in the user's code causes it. The cause might be a bug in the driver, or something wrong in the user's code that the driver ends up dealing with in this way. And using a different driver won't eliminate the second case; it will just produce a different type of error.
    This is correct, however...
    To solve a problem such as this in a logical or scientific manner, we should start by eliminating all the variable sources of the error that we can.
    Using the second driver does eliminate ODBC and the MySQL ODBC driver as possible sources of the problem.
    So if another error does in fact surface as the MySQL JDBC driver's equivalent of the problem, we can be fairly certain that the error is located in the user's code, which puts us much farther along in the debugging process.

  • Queries running slow after migration

    Hi Guys,
    We have just migrated our database server from SQL Server 2000 to 2008 (Enterprise), and a few queries are running slower than on SQL 2000. We have performed all post-migration steps (statistics update, index rebuilds) and everything is working fine except for 5-7 queries
    which are running surprisingly slowly.
    We have checked the execution plans and there is a difference between them: SQL 2008 is using a different execution plan.
    Can you please suggest what other steps can be performed to make them run fast?
    SM

    1) How much memory is assigned to SQL Server after the migration?
    2) Is the database compatibility level set to the new version?
    3) Update statistics with a full scan.
    4) Check fragmentation and reorganize or rebuild indexes.
    5) Check that the fill factor is in the 70-90 range.
    6) Look for missing indexes.
    7) What is the database size?
    8) Are you using temp tables?
    9) Trace flags 1222 and 1204 (DBCC TRACEON) will capture deadlock information in the error log.
    10) Are the mdf and ldf files on separate drives?
    11) Check for any open transactions.
    12) Check memory and CPU load.
    Check and confirm.

  • Report in Designer fast, in Viewer extremely slow

    Hi.
    I have a report which connects to a SQL Server backend, calling 3 stored procs which deliver the data needed for the report. However, when I execute the report in the Designer (the web app uses CR 9, but I'm testing it with CR 2008 that came with VS 2008) it takes approx. 20 seconds to return with the data - yes, the query takes rather long...
    When I run our web application and call up the same report, using the same parameters and connected to the same database, the Viewer sits there for about 10 minutes before finally showing the report. I've been trying to determine the cause of this but have come up empty so far.
    The report itself is a fairly simple report: headers, a parameter overview (the report uses parameterized queries), the data, and no subtotals, no subreports, no formulas.
    Why is this taking so long using the Viewer? Apparently it can be fast(er), since the Designer comes back within 20 secs WITH the correct data!
    I've tried a couple of things to see if I could determine the cause of the bad performance, but so far I've had no luck in improving performance whatsoever. The only thing left would be redesigning the underlying stored proc, but this is a rather complex stored proc and rewriting it would be no small task.
    Does anybody have any idea what to do next? Our customers are really annoyed by this (which I can understand) since they sometimes need to run this report a couple of times a day...

    Ludek Uher wrote:
    >
    > Troubleshooting slow performance
    >
    > First thing to do with slow reports would be consulting the article "Optimizing Reports for the Web". The article can be downloaded from this location:
    >
    > https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/701c069c-271f-2b10-c780-dacbd90b2dd8
    >
    Interesting article. Unfortunately, trying several of the suggestions made, it didn't improve the report's performance. No noticeable difference in either Designer or Viewer.
    >
    > Next, determine where is the performance hit coming from? With Crystal Reports, there are at least four places in your code where slow downs may occur. These are:
    >
    > Report load
    > Connection to the data source
    > Setting of parameters
    > Actual report output, be it to a viewer, export or printer
    >
    This part is not relevant. Loading the report isn't the problem (first query being executed under 0.5 seconds after starting the report); as I'll explain further at the end of this reply.
    > A number of report design issues, report options and old runtimes may affect report performance. Possible report design issues include:
    >
    > • OLE object inserted into a report is not where the report expects it to be. If this is the case, the report will attempt to locate the object, consuming potentially large amounts of time.
    The only OLE object is a picture with the company logo. It is visible in design time though, so I guess that means it is saved with the report?
    > • The subreport option "Re-import when opening" is enabled (right click the subreport(s), choose Format Subreport, look at the Subreport tab). This is a time consuming process and should be used judiciously.
    The report contains no subreports.
    > • Specific printer is set for the report and the printer does not exist. Try the "No printer" option (File | Page Setup). Also, see the following resources regarding printers and Crystal Reports;
    Tried that. It was set to the Microsoft XPS Document writer, but checking the 'No printer' option only made a slight difference (roughly 0.4 seconds in Designer).
    > • The number of subreports the report contains and in which section the subreports are located will impact report performance. Minimize the number of subreports used, or avoid using subreports if possible. Subreports are reports within a report, and if there is a subreport in a detail section, the subreport will run as many times as there are records, leading to long report processing times. Incorrect use of subreports is often the biggest factor why a report takes a long time to preview.
    As stated before, the report has no subreports.
    > • Use of "Page N of M" or "TotalPageCount". When the special field "Page N of M" or "TotalPageCount" is used on a report, it will have to generate each page of the report before it displays the first page. This will cause the report to take more time to display the first page of the report.
    The report DOES use the TotalPageCount and 'Page N of M' fields. But since the report only consists of 3 pages, of which only 2 contain database-related data (read further below), I think this would not be a problem.
    > • Remove unused tables, unused formulas and unused running totals from the report. Even if these objects are not used in a report, the report engine will attempt to evaluate the objects, thus affecting performance.
    > • Suppress unnecessary report sections. Even if a report section is not used, the report engine will attempt to evaluate the section, thus affecting performance.
    > • If summaries are used in the report, use conditional formulas instead of running totals whenever possible.
    > • Whenever possible, limit records through the Record Selection Formula, not suppression.
    > • Use SQL expressions to convert fields to be used in record selection instead of using formula functions. For example, if you need to concatenate 2 fields together, instead of doing it in a formula, you can create a SQL Expression Field. It will concatenate the fields on the database server, instead of doing it in Crystal Reports. SQL Expression Fields are added to the SELECT clause of the SQL query sent to the database.
    > • Using one command table, stored procedure or table view as the data source can be faster if you return only the desired data set.
    > • Perform grouping on the database server. This applies if you only need to return the summary to your report but not the details. It will be faster as less data will be returned to the report.
    > • Local client as well as server computer processor speed. Crystal Reports generates temp files in order to process the report. The temp files are used to further filter the data when necessary, as well as to group, sort, process formulas, and so on.
    All of the above points become moot if you know the structure of the report:
    3 pages, no subreports, 3 stored procs used, which each return a dataset.
    - Page 1 is just a summary of the parameters used for the report. This page also includes the TotalPageCount  field;
    - Page 2 uses 2 stored procs. The first one returns a dataset consisting of 1 row containing the headings for the columns of the data returned from stored proc 2. There will always be the same number of columns (only their heading will be different depending on the report), and the dataset is simply displayed as is.
    - The data from stored proc 2 is also displayed on Page 2. The stored proc returns a matrix, always the same number of columns, which is displayed as is. All calculations, groupings, etc. are done on the SQL Server;
    - Page 3 uses the third stored proc to display totals for the matrix from the previous page. This dataset too will always have the same number of columns, and all totaling is done on the database server. Just displaying the dataset as is.
    That's it. All heavy processing is done on the server.
    Because of the simplicity of the report I'm baffled as to why it would take so much more time when using the Viewer than from within the Designer.
    > Report options that may also affect report performance:
    >
    > • "Verify on First Refresh" option (File | Report Options). This option forces the report to verify that no structural changes were made to the database. There may be instances when this is necessary, but once again, the option should be used only if really needed. Often, disabling this option will improve report performance significantly.
    > • "Verify Stored Procedure on First Refresh" option (File | Report Options). Essentially the same function as above, however this option will only verify stored procedures.
    Hm. Both options WERE selected, and deselecting them caused the report to run approx. 10 seconds slower (from the Designer)...
    >
    >
    > If at all possible, use the latest runtime, be it with a custom application or the Crystal Reports Designer.
    >
    > • The latest updates for the current versions of Crystal Reports can be located on the SAP support download page:
    >
    > https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/bobj_download/main.htm
    >
    I've not done that (yet). Mainly because CR 10.5 came with VS2008, so it was easier to test to see if I can expect an improvement regarding my problem. Up till now, I see no improvement... ;-(
    > • Crystal Reports version incompatibility with Microsoft Visual Studio .NET. For details of which version of Crystal Reports is supported in which version of VS .NET, see the following wiki:
    >
    > https://wiki.sdn.sap.com/wiki/display/BOBJ/CrystalReportsassemblyversionsandVisualStudio+.NET
    >
    >
    According to that list I'm using a correct version with VS2008. I might consider upgrading it to CR 12, but I'm not sure what I would gain with that. Because I can't exactly determine the cause of the performance problems I can't tell whether upgrading would resolve the issue.
    > Performance hit is on database connection / data retrieval
    >
    > Database fine tuning, which may include the installation of the latest Service Packs for your database must be considered. Other factors affecting data retrieval:
    >
    > • Network traffic
    > • The number of records returned. If a SQL query returns a large number of records, it will take longer to format and display than if it was returning a smaller data set. Ensure you only return the necessary data on the report, by creating a Record Selection Formula, or basing your report off a Stored Procedure, or a Command Object that only returns the desired data set.
    The amount of network traffic is extremely minimal. Two datasets (sp 1 and 3) return only 1 row containing 13 columns. Sp 2 returns the same number of columns and (in this client's case) a dataset of only 22 rows, mainly numeric data!
    > • The amount of time the database server takes to process the SQL query. Crystal Reports sends the SQL query to the database, the database processes it, and returns the data set to Crystal Reports.
    Ah. Here we get interesting details. I have been monitoring the queries fired using SQL Profiler and found that:
    - ALL queries are executed twice!
    - The 'data' query (sp 2) which takes the largest amount of time is even executed 3 times.
    For example, this is what SQL profiler shows (not the actual trace, but edited for clarity):
    Query                  Start time         Duration (ms)
    sp 1 (headers)      11:39:31.283     13
    sp 2 (data)            11:39:31.330     23953
    sp 3 (totals)          11:39.55.313     1313
    sp 1 (headers)      11:39:56.720     16
    sp 2 (data)            11:39:56.890     24156
    sp 3 (totals)          11:40:21.063     1266
    sp 2 (data)            11:40:22.487     24013
    Note that in this case I didn't trace the queries for the Viewer, but I have done just that last week. For sp2 the values run up to 9462 seconds!!!
    > • Where is the Record Selection evaluated? Ensure your Record Selection Formula can be translated to SQL, so that the data can be filtered down on the server. If a selection formula cannot be translated into the correct SQL, the data filtering will be done on the local client computer, which in most cases will be much slower. One way to check if a formula function is being translated into SQL is to look at "Show SQL Query" in the CR Designer (Database -> Show SQL Query). Many Crystal Reports formula functions cannot be translated into SQL because there may not be a standard SQL equivalent. For example, control structures like IF THEN ELSE cannot be translated into SQL; they will always be evaluated on the client computer. For more information on IF THEN ELSE statements see note number 1214385 in the notes database:
    >
    > https://www.sdn.sap.com/irj/sdn/businessobjects-notes
    >
    Not applicable in this case I'm afraid. All the report does is fetch the datasets from the various stored procs and display them; no additional processing is taking place. Also, no records are selected as this is done using the parameters which are passed on to the stored procs.
    > • Link tables on indexed fields whenever possible. While linking on non-indexed fields is possible, it is not recommended.
    Although the stored procs might not be optimal, that is beside the point here. The point is that performance of a report when run from the Designer is acceptable (roughly 30 seconds for this report) but when viewing the same report from the Viewer the performance drops dramatically, into the range of 'becoming unusable'.
    The report has its data connection set at runtime, but it is set to the same values it had at design time (hence the same DB server). I'm running this report connected to a stand-alone SQL Server which is a copy of my client's production server, and I'm the only user of that server, meaning there are no external disturbing factors to deal with. And still I'm experiencing the same problems my client has.
    I really need this problem solved. So far, I've not found a single thing to blame for the bad performance, except maybe that queries are executed multiple times by the Crystal Reports engine. If it didn't do that, the time required to show the report would drop by approx. 60%.
    ...Charles...

  • Polling adapter fails unexpectedly

    I am trying to test an ESB polling service. I created a sample polling service that checks a database table for changes. The service does not work properly; I found this error message in the log.xml file:
    JCA inbound adapter service listener for "Fulfillment.TestPooler_RS.receive" with endpoint ID "[TestPooler_ptt::receive(CustomerCollection)]" has been requested to shutdown by Resource Adapter due to fatal error. Reason : ORABPEL-11624
    DBActivationSpec Polling Exception.
    Query name: [TestPooler], Descriptor name: [TestPooler.Customer]. Polling the database for events failed on this iteration.
    If the cause is something like a database being down successful polling will resume once conditions change. Caused by Exception [TOPLINK-6024] (Oracle TopLink - 10g Release 3 (10.1.3.1.0) (Build 061004)): oracle.toplink.exceptions.QueryException
    Exception Description: Modify queries require an object to modify.
    Query: UpdateObjectQuery(null).
    What can be the cause of this error?

    Hi Alex,
    Cause of the problem:
    Both the OC4J instance running BPEL and the database adapter reached the maximum number of connections and were unable to establish new ones.
    The reason for the problem is database adapter Bug 5595347.
    According to Bug 5595347, the database adapter might leave open connections behind.
    The fix for Bug 5595347 is included in Patch 5638122.
    Solution
    To implement the solution, please execute the following steps:
    Apply Patch 5638122.
    References
    Bug 5595347 - ORIONCMTCONNECTION NOT CLOSED IN ORABPEL~OC4J_BPEL~DEFAULT_ISLAND~1
    Note 314422.1 - Remote Diagnostic Agent (RDA) 4 - Getting Started
    Patch 5638122 - MERGE LABEL REQUEST ON TOP OF 10.1.2.0.2 FOR BUGS 5149866 5595347 5205630
    Cheers,
    Abhi...

  • Java.sql.SQLException: Cannot rollback a transactional connection

    Hi
    This is what I am trying to do:
    try {
        conn = getConnection();
        conn.setAutoCommit(false);
        execQuery(sql_1);
        execQuery(sql_2);
        conn.commit();
    } catch (Exception ex) {
        conn.rollback();
    }
    I have made an error in sql_2, so I get an exception. My queries use the same "conn", so the rollback should apply to the first query, but it has been committed instead. How do I get rollback to work?
    I read this in another topic:
    Make sure that you configure your database connection in the jboss.jcml and not in the bean (or servlet) itself.
    Then do a lookup via JNDI. That way, the driver gets bound to the transaction manager and rollback() works fine.
    But I don't know how to configure the connections, or how the transaction manager works.
    Can someone help me?

    Maybe I should add that I am using an Oracle DB and the code is in an entity bean.
    This changes the way you want to do rollbacks. Rollbacks in EJBs are handled by the container. In order to roll back, you need to flag the transaction to be rolled back. To do this, call the setRollbackOnly() method on the EJB's SessionContext: sessionContext.setRollbackOnly();
    For this to work correctly, you must declare the method in the EJB's deployment descriptor as requiring transaction handling by specifying the appropriate value for the <trans-attribute> element of your EJB's DD.
    BTW, you'd be better off posting EJB-related questions in the J2EE SDK forum.

  • Using PowerShell Remoting with Workflows and Functions

    Hello,
    I have an existing set of scripts with a mixture of Modules containing the functions, Workflows that call the functions and Main script that invokes the workflows.
    Step 1: The Main script picks up the list of servers from a custom db and invokes the relevant workflow for further operation
    Step 2: The workflow runs as foreach parallel and calls a function by passing the server name as a parameter
    Step 3. The function in the module does the real job
    All of this runs from a central location. 90% of the servers are geographically dispersed, which seems to be a concern: the turnaround time is too high.
    Hence, I would like to use WinRM (PowerShell Remoting). I have configured WinRM on all the servers with a firewall exception.
    My Queries:
    1. Apart from Enter-PSSession and Invoke-Command, are there any other cmdlets?
    2. Say for example; I plan to use Enter-PSSession, should I embed the code with Workflow?
    3. How will it retrieve the output?
    Please advise. Thanks,
    Rajiv

    You cannot use Enter-PSSession in a script.  
    It's only intended to be used interactively from the console. If you try to use it in a script, the script will run the subsequent commands on the local system, not in the remote session.
    You'll need to use Invoke-Command, and you'll have to include the function definitions in the invoked script block, since those functions will not exist in the remote sessions.
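    The point about function definitions not existing in the remote session applies to any out-of-process execution: whatever you send must carry its own definitions. As a runnable analogy (in Python rather than PowerShell, since the thread's cmdlets need a live WinRM target), the child interpreter below knows nothing about functions defined in the parent session unless their source is included in the payload — the same reason the reply says to embed the functions in the Invoke-Command script block. The function name and its body here are hypothetical stand-ins.

```python
import subprocess, sys

# A function defined only in this (parent) session -- a remote/child
# interpreter has no idea it exists, just like a remote PSSession.
FUNC_SOURCE = """
def get_max_cpu(server):
    # hypothetical stand-in for the real module function
    return server + ": ok"
"""

# Ship the definition together with the call, the way
# Invoke-Command -ScriptBlock { function ...; ... } would.
payload = FUNC_SOURCE + "\nprint(get_max_cpu('server01'))\n"

result = subprocess.run([sys.executable, "-c", payload],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # -> server01: ok
```

    Had the payload contained only the call and not FUNC_SOURCE, the child would fail with a NameError — the analogue of the "function not recognized" errors you get when a remote script block calls a locally defined function.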

  • Query List

    Hi all ,
    Today we observed that none of the queries are displayed in the SQ01 list except 5-6 of them. What might the problem be?

    Hi,
    Please check forum subject. Here is for SAP Business One user only. Close your thread and post it on a proper one.
    Thanks,
    Gordon

  • DBAdapter polling for new or changed records not issuing callback?

    I have a process which includes a partner link with a database adapter.
    The database adapter is to poll a table for new records using an external database server.
    I have no relationships setup, it's a single-table query.
    The adapter is configured to delete rows after they are read.
    I have the receive process of that partner link tied to a receive activity and its own input variable assigned.
    My process compiles and deploys successfully. When I initiate the process through the BPEL console, the instance of the process waits at the receive activity and nothing happens, even after I manually add records to the table being polled. (which do get deleted, apparently by the partner link.)
    I have other dbadapter partner links in the same process using the same connection that work fine, specifically using procedure execution and custom sql queries.
    I can't find any errors or warnings about this, and the interactions manager in the BPEL Console allows me to manually send a message to the instance, allowing it to continue.
    The application server is 10.1.3.1.0 and I am using JDeveloper 10.1.3.2.0.4066
    Message was edited by:
    Paul.Dunbar
    Follow-up:
    When I set activation logging to debug, I get the following in domain.log
    <2007-04-03 08:15:28,591> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    <2007-04-03 08:15:28,591> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> SELECT SOME_ID, SOME_VALUE FROM PAULD.SOME_TABLE ORDER BY SOME_ID ASC
    <2007-04-03 08:15:28,592> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client released
    From looking at this it may be that the callback to the receive activity is not occurring?
    Second follow-up:
    I'm still having this issue but must give up on troubleshooting.
    The documentation on the polling adapter says nothing about handling callbacks; I've been trying to use correlations here with no luck. I'm also not seeing any documentation about setting up correlations for use with the database adapter.
    Now the issue I'm having is that the inbound database adapter is still polling my table and deleting records, even though I've undeployed my process.

    I did not. Currently the manual recovery area is empty. I've tried re-deploying my process and am now getting the following in my logs (all debugging on):
    <2007-04-05 08:01:36,671> <DEBUG> <default.collaxa.cube.engine.deployment> <LockManager::acquire> Acquired read lock for SendEventNotice-1.0
    <2007-04-05 08:01:36,671> <DEBUG> <default.collaxa.cube.engine.deployment> <LockManager::release> Released lock for SendEventNotice-1.0
    <2007-04-05 08:01:36,672> <DEBUG> <default.collaxa.cube.engine.deployment> <LockManager::acquire> Acquired read lock for SendEventNotice-1.0
    <2007-04-05 08:01:36,672> <DEBUG> <default.collaxa.cube.engine.deployment> <LockManager::release> Released lock for SendEventNotice-1.0
    <2007-04-05 08:01:36,678> <DEBUG> <default.collaxa.cube.engine.data> <ConnectionFactory::getConnection> GOT CONNECTION 1 Autocommit = false
    <2007-04-05 08:01:36,681> <DEBUG> <default.collaxa.cube.engine.data> <ConnectionFactory::closeConnection> CLOSE CONNECTION 0
    <2007-04-05 08:01:36,681> <DEBUG> <default.collaxa.cube.engine.data> <ConnectionFactory::getConnection> GOT CONNECTION 1 Autocommit = false
    <2007-04-05 08:01:36,682> <DEBUG> <default.collaxa.cube.engine.data> <ConnectionFactory::closeConnection> CLOSE CONNECTION 0
    <2007-04-05 08:01:36,686> <DEBUG> <default.collaxa.cube.engine.deployment> <LockManager::acquire> Acquired read lock for SendEventNotice-1.0
    <2007-04-05 08:01:36,687> <DEBUG> <default.collaxa.cube.engine.deployment> <LockManager::release> Released lock for SendEventNotice-1.0
    <2007-04-05 08:01:36,688> <DEBUG> <default.collaxa.cube.engine.deployment> <LockManager::acquire> Acquired read lock for SendEventNotice-1.0
    <2007-04-05 08:01:36,688> <DEBUG> <default.collaxa.cube.engine.deployment> <LockManager::release> Released lock for SendEventNotice-1.0
    <2007-04-05 08:01:36,688> <DEBUG> <default.collaxa.cube.engine.deployment> <LockManager::acquire> Acquired write lock for SendEventNotice-1.0
    <2007-04-05 08:01:36,688> <DEBUG> <default.collaxa.cube.engine.data> <BaseProcessPersistenceAdaptor::updateMetaData> Updating process metadata [ domain = default, process = SendEventNotice, revision = 1.0, state = 0, lifecycle = 0 ]
    <2007-04-05 08:01:36,688> <DEBUG> <default.collaxa.cube.engine.data> <ConnectionFactory::getConnection> GOT TX CONNECTION 1 Autocommit = false
    <2007-04-05 08:01:36,691> <DEBUG> <default.collaxa.cube.engine.dispatch> <BaseDispatchSet::receive> Receiving message log process event message 61b55bd8c71ae1e3:-3d66bcb1:111c1b2d4aa:-7f9c for set system
    <2007-04-05 08:01:36,691> <DEBUG> <default.collaxa.cube.engine.dispatch> <Dispatcher::adjustThreadPool> Allocating 1 thread(s); pending threads: 1, active threads: 0, total: 84
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.engine.dispatch> <DispatcherBean::send> Sent message to queue
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.engine> <DomainObserverRegistry::notify> Notifying observer class oracle.bpel.services.rules.DeploymentListener with aspect class com.collaxa.cube.engine.observer.ProcessStateChangeAspect for domain default
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.engine> <DomainObserverRegistry::notify> Notifying observer class com.collaxa.cube.engine.test.driver.deployment.BPELTestDeployer with aspect class com.collaxa.cube.engine.observer.ProcessStateChangeAspect for domain default
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.engine> <DomainObserverRegistry::notify> Notifying observer class oracle.bpel.services.workflow.task.PurgeTask with aspect class com.collaxa.cube.engine.observer.ProcessStateChangeAspect for domain default
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.services> <oracle.bpel.services.workflow.task.PurgeTask::update(ICubeAspect> Called for aspect com.collaxa.cube.engine.observer.ProcessStateChangeAspect
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.engine> <DomainObserverRegistry::notify> Notifying observer class oracle.tip.esb.configuration.deployment.bpel.BPELSvcDeploymentManager with aspect class com.collaxa.cube.engine.observer.ProcessStateChangeAspect for domain default
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.engine.dispatch> <SystemDispatchSet::fetchScheduled> Fetched message log process event message 61b55bd8c71ae1e3:-3d66bcb1:111c1b2d4aa:-7f9c from system queue for processing
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.engine.dispatch> <LogProcessEventMessageHandler::handle> Processing log process event message 61b55bd8c71ae1e3:-3d66bcb1:111c1b2d4aa:-7f9c
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.engine> <DomainObserverRegistry::notify> Notifying observer class com.collaxa.cube.engine.data.CubeInstanceCache with aspect class com.collaxa.cube.engine.observer.ProcessStateChangeAspect for domain default
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.engine> <DomainObserverRegistry::notify> Notifying observer class oracle.bpel.services.workflow.DeploymentListener with aspect class com.collaxa.cube.engine.observer.ProcessStateChangeAspect for domain default
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.services> <oracle.bpel.services.workflow.DeploymentListener::update(ICubeAspect> Called for aspect com.collaxa.cube.engine.observer.ProcessStateChangeAspect
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.engine> <DomainObserverRegistry::notify> Notifying observer class com.collaxa.cube.engine.core.BaseCubeProcess$ActivationObserver with aspect class com.collaxa.cube.engine.observer.ProcessStateChangeAspect for domain default
    <2007-04-05 08:01:36,693> <INFO> <default.collaxa.cube.activation> <AdapterFramework::Inbound> JCAActivationAgent::onStateChanged State is changed for process 'bpel://localhost/default/SendEventNotice~1.0/', state=ON
    <2007-04-05 08:01:36,693> <INFO> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Adapter Framework instance: OraBPEL - endpointActivation for portType=initialCoordination_ptt, operation=receive
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Looking up jca:address...
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Looking up WSDL Service jca:address for portType=initialCoordination_ptt and operation=receive
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Locating Resource Adapter for jca:address.. {http://xmlns.oracle.com/pcbpel/wsdl/jca/}address: location='eis/DB/TST2' ManagedConnectionFactory='oracle.tip.adapter.db.DBManagedConnectionFactory' (properties: {Password=E466676A7A458FA6EBCFF72E13C04843, PlatformClassName=oracle.toplink.platform.database.oracle.Oracle10Platform, ConnectionString=jdbc:oracle:thin:@//lxdb2:1521/TST2, DriverClassName=oracle.jdbc.OracleDriver, UserName=SONRIS_DBA})
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Setting up Input Header Message QName: null
    <2007-04-05 08:01:36,693> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Setting up Input Header Part Name: null
    <2007-04-05 08:01:36,694> <DEBUG> <default.collaxa.cube.engine.data> <ConnectionFactory::getConnection> GOT TX CONNECTION 2 Autocommit = false
    <2007-04-05 08:01:36,694> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Setting Adapter Name: Database Adapter
    <2007-04-05 08:01:36,694> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Determining Input and Output WSDL Message elements
    <2007-04-05 08:01:36,695> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Creating ActivationSpec...
    <2007-04-05 08:01:36,695> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Instantiating ActivationSpec oracle.tip.adapter.db.DBActivationSpec
    <2007-04-05 08:01:36,695> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Populating ActivationSpec oracle.tip.adapter.db.DBActivationSpec with properties: {MaxRaiseSize=1, MarkReadFieldName=DONE_READING, UseBatchDestroy=false, MarkUnreadValue=UNREAD, ReturnSingleResultSet=false, QueryName=initialCoordination, MarkReadValue=READ, MarkReservedValue=RESERVED, DescriptorName=initialCoordination.AnotherTable, NumberOfThreads=1, MappingsMetaDataURL=initialCoordination_toplink_mappings.xml, PollingStrategyName=LogicalDeletePollingStrategy, PollingInterval=5, SequencingFieldName=AN_ID, MaxTransactionSize=unlimited}
    <2007-04-05 08:01:36,697> <DEBUG> <default.collaxa.cube.engine.data> <ConnectionFactory::closeConnection> CLOSE TX CONNECTION 1
    <2007-04-05 08:01:36,700> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Preparing ActivationSpec for Translation Service
    <2007-04-05 08:01:36,701> <DEBUG> <default.collaxa.cube.engine.dispatch> <BaseDispatchSet::acknowledge> Acknowledged message log process event message 61b55bd8c71ae1e3:-3d66bcb1:111c1b2d4aa:-7f9c
    <2007-04-05 08:01:36,705> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Preparing ActivationSpec for Connection Factory and Properties
    <2007-04-05 08:01:36,705> <DEBUG> <default.collaxa.cube.activation> <AdapterFramework::Inbound> Invoking endpointActivation() on Resource Adapter
    <2007-04-05 08:01:36,708> <INFO> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.DBResourceAdapter endpointActivation> Activating: oracle.tip.adapter.db.DBActivationSpec@4e7ed
    <2007-04-05 08:01:36,710> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    <2007-04-05 08:01:36,713> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client released
    <2007-04-05 08:01:36,715> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.DBEndpoint start> About to schedule the inbound polling thread with the work manager. ActivationSpec: oracle.tip.adapter.db.DBActivationSpec@4e7ed
    <2007-04-05 08:01:36,716> <INFO> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.DBEndpoint start> Kicked off 1 threads.
    <2007-04-05 08:01:36,717> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> client acquired
    <2007-04-05 08:01:36,719> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> begin transaction
    <2007-04-05 08:01:36,721> <DEBUG> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.TopLinkLogger log> rollback transaction
    <2007-04-05 08:01:36,722> <ERROR> <default.collaxa.cube.activation> <Database Adapter::Inbound> <oracle.tip.adapter.db.InboundWork handleException> Non retriable exception during polling of the database ORABPEL-11624
    DBActivationSpec Polling Exception.
    Query name: [initialCoordination], Descriptor name: [initialCoordination.AnotherTable]. Polling the database for events failed on this iteration.
    If the cause is something like a database being down successful polling will resume once conditions change. Caused by Exception [TOPLINK-6024] (Oracle TopLink - 10g Release 3 (10.1.3.1.0) (Build 061004)): oracle.toplink.exceptions.QueryException
    Exception Description: Modify queries require an object to modify.
    Query: UpdateObjectQuery(null).
    <2007-04-05 08:01:36,722> <FATAL> <default.collaxa.cube.activation> <AdapterFramework::Inbound> [initialCoordination_ptt::receive(AnotherTableCollection)]Resource Adapter requested Process shutdown!
    <2007-04-05 08:01:36,723> <FATAL> <default.collaxa.cube.activation> <AdapterFramework::Inbound>
    ORABPEL-11624
    DBActivationSpec Polling Exception.
    Query name: [initialCoordination], Descriptor name: [initialCoordination.AnotherTable]. Polling the database for events failed on this iteration.
    If the cause is something like a database being down successful polling will resume once conditions change. Caused by Exception [TOPLINK-6024] (Oracle TopLink - 10g Release 3 (10.1.3.1.0) (Build 061004)): oracle.toplink.exceptions.QueryException
    Exception Description: Modify queries require an object to modify.
    Query: UpdateObjectQuery(null).
    As reflected in the log messages, the process is automatically turned off. Turning it back on is ineffective; the server turns it back off with the same exception.
    This is a separate issue, but I'm not seeing much reason for it to be happening either. I've scrapped the process and started over, using my own BPEL process that simulates what I would expect the dbadapter to do when it's configured to poll for records.
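    For reference, the LogicalDeletePollingStrategy named in the activation spec above boils down to a mark-and-sweep over a status column (MarkReadFieldName=DONE_READING, MarkUnreadValue=UNREAD, MarkReadValue=READ). A minimal sketch of that polling loop, using sqlite3 purely as a stand-in for the real database:

    ```python
    import sqlite3

    def poll_once(conn):
        """One polling iteration: read UNREAD rows, process them, mark them READ."""
        cur = conn.execute(
            "SELECT AN_ID FROM ANOTHER_TABLE "
            "WHERE DONE_READING = 'UNREAD' ORDER BY AN_ID"
        )
        ids = [row[0] for row in cur.fetchall()]
        for an_id in ids:
            # ...here the adapter would raise the row as an event to the process...
            conn.execute(
                "UPDATE ANOTHER_TABLE SET DONE_READING = 'READ' WHERE AN_ID = ?",
                (an_id,),
            )
        conn.commit()
        return ids

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE ANOTHER_TABLE (AN_ID INTEGER, DONE_READING TEXT)")
    conn.executemany(
        "INSERT INTO ANOTHER_TABLE VALUES (?, 'UNREAD')", [(1,), (2,), (3,)]
    )
    print(poll_once(conn))   # first pass picks up all three rows
    print(poll_once(conn))   # second pass finds nothing left to raise
    ```

    The TOPLINK-6024 "Modify queries require an object to modify" error corresponds to the mark-read UPDATE step being handed nothing to update.
    
    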

  • Result Row calculation

    Hi, the result row calculation is not happening properly as per the formula.
    Formula for Absolute Error = Final Forecast - Actuals
    Following is the report output:
    Material          Key Figure        Period01
    10324401          Final Forecast    733,490.000 LB
                      Actuals           555,583.67 LB
                      Absolute Error    177,906.33
    10326795          Final Forecast    1,306,894.935 LB
                      Actuals           1,406,158.27 LB
                      Absolute Error    99,263.33
    10329435          Final Forecast    188,000.000 LB
                      Actuals           192,000.00 LB
                      Absolute Error    4,000.00
    10580325          Final Forecast    76,105.938 LB
                      Actuals           215,999.82 LB
                      Absolute Error    139,893.89
    10580427          Final Forecast    158,733.076 LB
                      Actuals           0.00 LB
                      Absolute Error    158,733.08
    Overall Result    Final Forecast    2,463,223.949 LB
                      Actuals           2,369,741.76 LB
                      Absolute Error    579,796.62
    In the above report, the material-wise calculation for Absolute Error is correct, BUT the Overall Result row just sums up all the materials' absolute errors.
    What I want in the Overall Result is for Absolute Error to be calculated as overall Final Forecast - overall Actuals (i.e. the formula should be applied to the result row as well).
    I tried exception aggregation and other options, but nothing is working.
    Can anyone help me apply the formula to the result rows as well?
    Helpful inputs will be appreciated with points.
    thanks in advance

    Hi All, thanks for your replies.
    I solved my issue as follows.
    Formula for Absolute Error = Final Forecast - Actuals
    The Absolute Error formula had settings for 'Exception Aggregation' and 'Calculate Result As'.
    I removed both settings and refreshed the report; now the result row for 'Absolute Error' is calculated as desired, i.e. the formula is applied to the result-row 'Final Forecast' and 'Actuals' values.
    Regards
    Navenas
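    The difference Navenas describes can be reproduced numerically: summing the per-material absolute errors gives a different overall result than applying the formula to the result-row totals. A quick check with the figures from the report above (Python, purely illustrative):

    ```python
    # (final_forecast, actuals) per material, taken from the report above
    rows = {
        "10324401": (733490.000, 555583.67),
        "10326795": (1306894.935, 1406158.27),
        "10329435": (188000.000, 192000.00),
        "10580325": (76105.938, 215999.82),
        "10580427": (158733.076, 0.00),
    }

    # per-material absolute error, as shown in the report
    errors = {m: abs(f - a) for m, (f, a) in rows.items()}

    # what the query showed: the result row sums the per-material errors
    summed_errors = sum(errors.values())

    # what was wanted: apply the formula to the result-row totals
    total_forecast = sum(f for f, _ in rows.values())
    total_actuals = sum(a for _, a in rows.values())
    formula_on_totals = abs(total_forecast - total_actuals)

    print(round(summed_errors, 2))      # 579796.62 -- matches the report's Overall Result
    print(round(formula_on_totals, 2))  # 93482.19  -- the desired result-row value
    ```
    
    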

  • Bug with DbXml::XmlResults::hasNext

    We are using 2.3.10 and get the following crash during XQueries. I also saw an online discussion:
    http://osdir.com/ml/db.dbxml.general/2005-08/msg00044.html
    I wanted to know whether just using next() will be a sufficient workaround, and whether this is a known issue for that version.
    #0 0x00002b632fab5a3d in *__GI_raise (sig=5340) at
    ../nptl/sysdeps/unix/sysv/linux/raise.c:67
    #1 0x00002b632fab6f1e in *__GI_abort () at ../sysdeps/generic/abort.c:88
    #2 0x00002b632f6b6ba8 in __gnu_cxx::__verbose_terminate_handler () from
    /usr/lib64/libstdc++.so.6
    #3 0x00002b632f6b4d86 in __cxa_call_unexpected () from
    /usr/lib64/libstdc++.so.6
    #4 0x00002b632f6b4db3 in std::terminate () from /usr/lib64/libstdc++.so.6
    #5 0x00002b632f6b4eb3 in __cxa_throw () from /usr/lib64/libstdc++.so.6
    #6 0x00002b632e3cbfaa in DbXml::LazyDIResults::hasNext (this=0x2aaaab598ee0)
    at Results.cpp:308
    #7 0x00002b632e3e2609 in DbXml::XmlResults::hasNext (this=0x14dc) at
    XmlResults.cpp:74

    Hello,
    It's a documentation issue then. Thanks. What you are seeing is that lazily evaluated result sets execute "on demand", which means each successive result is evaluated in response to a next() or hasNext() call. The evaluation can raise exceptions because some queries hit runtime (vs. parse-time) evaluation issues that are only encountered in the middle of the actual execution.
    I've made a note to update the documentation to be more accurate.
    Regards,
    George
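    George's point - that a lazily evaluated result set runs the query "on demand", so runtime errors surface from next()/hasNext() rather than from the call that built the results - can be sketched with a Python generator (purely illustrative; the DbXml API itself is C++):

    ```python
    def lazy_results(values):
        """Yield results one at a time; a runtime error only appears mid-iteration."""
        for v in values:
            if v is None:
                # the query parsed fine earlier; this failure exists only at evaluation time
                raise RuntimeError("runtime evaluation error mid-results")
            yield v

    results = lazy_results([1, 2, None, 4])
    seen = []
    try:
        for r in results:       # each step is the analogue of hasNext()/next()
            seen.append(r)
    except RuntimeError:
        print("consumed before failure:", seen)   # [1, 2]
    ```

    The practical consequence is the same in either API: wrap the iteration itself, not just the query construction, in exception handling.
    
    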

  • Table rows lock problem

    Dear Oracle Tech.
    As end users of Oracle Database, we seek advice from Oracle on the following issue:
    Our business application is an OLTP system, and we currently have an ad hoc batch process that needs to be executed. This batch process mirrors a normal online user process in the way some tables are updated in the database. Our concern arises because our customer has requested that we run the batch process during normal office hours, which is not the norm. We would like your opinion on running a batch update process during normal office hours: are there issues of lock contention and a risk of deadlock? We understand Oracle databases have locking mechanisms built in, but is there a risk of a deadlock arising if we execute the batch process during normal office hours? And if a deadlock does occur, how easy is it to detect and resolve it and return the application to normal operation?
    Our database is Oracle 7.3.4 running on Solaris 2.6 over a network, our online application is built with PowerBuilder 7 and running on Windows95 clients.
    Thank You.
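    As a general note (not specific to Oracle 7.3.4): deadlock between a batch job and online users typically arises when two sessions update the same rows in different orders. The standard mitigation is to have every writer touch shared resources in one agreed order, sketched here with Python threads as a stand-in for database sessions:

    ```python
    import threading

    # two shared resources, e.g. two tables both processes update
    lock_a, lock_b = threading.Lock(), threading.Lock()

    def update_in_order(name, results):
        """Every writer acquires the locks in the same global order: a, then b.
        If one session took b first, the two could deadlock waiting on each other."""
        with lock_a:
            with lock_b:
                results.append(name)

    results = []
    threads = [
        threading.Thread(target=update_in_order, args=(n, results))
        for n in ("online_user", "batch_job")
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(sorted(results))   # both complete; no deadlock
    ```

    Oracle itself detects deadlocks and rolls back one statement with an error, but avoiding them by ordering (and by keeping batch transactions small) is far cheaper than resolving them.
    
    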

    I have a related problem and thought one of you might be able to help me. I am relatively new to JDBC.
    My code is very similar to Alex's, except that the queries are simpler (what does 'FOR UPDATE' do?).
    con.setAutoCommit(false);
    con.setTransactionIsolation( Connection.TRANSACTION_REPEATABLE_READ);
    query1:
    select * from table_bla where a=b
    query2:
    update table_bla set ... where a=b
    The problem is that between these two queries, new rows get inserted into table_bla that satisfy a=b. Consequently, those rows get updated by query2, even though they weren't retrieved by query1.
    How do I synchronize this so that nothing can be inserted into table_bla until my block of queries has executed (or how can I ensure that only the rows retrieved by query1 are updated)?
    Help much appreciated.
    Thanks!
    Sladjana
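    One way to get the behaviour Sladjana asks for - updating only the rows that query1 actually retrieved - is to collect the primary keys in query1 and restrict query2 to those keys, instead of repeating the a=b predicate. (In Oracle, SELECT ... FOR UPDATE additionally locks the retrieved rows until commit, which is what it is doing in Alex's query.) A sqlite3 sketch of the key-list approach; table and column names follow the post, the id column is invented for illustration:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE table_bla (id INTEGER PRIMARY KEY, a INTEGER, b INTEGER, val TEXT)"
    )
    conn.executemany("INSERT INTO table_bla (a, b, val) VALUES (?, ?, 'old')",
                     [(1, 1), (1, 1), (2, 3)])

    # query1: remember exactly which rows matched
    ids = [r[0] for r in conn.execute("SELECT id FROM table_bla WHERE a = b")]

    # ...another session inserts a new matching row in between...
    conn.execute("INSERT INTO table_bla (a, b, val) VALUES (1, 1, 'old')")

    # query2: update only the rows captured by query1, not everything matching a = b
    conn.executemany("UPDATE table_bla SET val = 'new' WHERE id = ?",
                     [(i,) for i in ids])
    conn.commit()

    print(conn.execute("SELECT COUNT(*) FROM table_bla WHERE val = 'new'").fetchone()[0])  # 2
    print(conn.execute("SELECT COUNT(*) FROM table_bla WHERE a = b").fetchone()[0])        # 3
    ```

    The late-arriving row matches a=b but keeps its old value, because query2 never saw it.
    
    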

  • 1Z0-051 advise

    Hi All,
         I took 1Z0-051 last night after several months of studying and failed by 3 percentage points. I basically ran out of time. I knew how to answer most questions, but the queries were just too time-consuming for me. Any advice? I did not expect 2 hours of extreme concentration, with most questions being queries with several close answers. I'm not even sure how else to study or what I should do differently. Is there a good strategy that I'm not seeing? I basically ran out of time and had to answer the last 10 questions randomly. I mean, come on, Oracle, what is the wisdom here?

    Time management is always a factor in certification exams. 1Z0-051 isn't the worst (so far in my experience, that title goes to 1Z0-144). However, because it is an unproctored exam, Oracle University had to ensure that the time limit was short enough that it would not be feasible for someone to look up the answer to each question. John's suggestion is reasonable, although I always recommend that when 'skipping' questions you put in your best guess, mark the question, and then move on to the next one. That way, if you don't have time to come back, there is at least a chance you got it correct. I haven't read John's technique article, but here is one I created a few months ago that was published at GoCertify. The last three tips are most relevant here.
    Oracle Certification: 10 Tips for any Exam | GoCertify
    However, from your note, you are indicating that most of the questions were taking you too much time. That, unfortunately, is just a matter of familiarity with SQL. Someone who can read SQL but does not have much experience with it might take three minutes to answer a given question, while a person who has worked with Oracle SQL for years would complete it in one. There is no cure for that except practicing writing queries until you are more comfortable with them. You can also check out some of the study resources I have gathered at my site for that exam. All of them are certification-safe and most are free. In particular, there are links to a number of YouTube videos of Oracle SQL intro courses.
    Oracle Certification Prep: Exam details and preparation resources for 1Z0-051
    One suggestion I can make, when trying to pick the correct answer from a list of similar SQL statements, is not to look for the right one initially but rather to identify the obviously wrong ones. Generally there will be at least one or two with an obvious flaw: the wrong number or order of columns in the SELECT clause, an invalid comparison in the WHERE clause, etc. I'll scan through them and quickly eliminate these from consideration before turning to the remaining ones.
