Azure Diagnostics - WADLogs Table not created

I am trying to log information whenever a user performs an operation in a WebRole in the cloud environment. In the emulator the information is written to the output window, but in the Azure portal I can't see it anywhere. I have enabled Azure Diagnostics and provided the Azure storage credentials, but the WADLogs table is not getting created. This is how I write the log: Trace.TraceInformation("Policy started");. I also used TraceSource to write the information to the log, but no luck.
My diagnostics.wadcfgx file contents are below:
  <PublicConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
    <WadCfg>
      <DiagnosticMonitorConfiguration overallQuotaInMB="4096">
        <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Verbose" />
        <Directories scheduledTransferPeriod="PT1M">
          <IISLogs containerName="wad-iis-logfiles" />
          <FailedRequestLogs containerName="wad-failedrequestlogs" />
        </Directories>
        <PerformanceCounters scheduledTransferPeriod="PT1M">
          <PerformanceCounterConfiguration counterSpecifier="\Memory\Available MBytes" sampleRate="PT3M" />
          <PerformanceCounterConfiguration counterSpecifier="\Web Service(_Total)\ISAPI Extension Requests/sec" sampleRate="PT3M" />
          <PerformanceCounterConfiguration counterSpecifier="\Web Service(_Total)\Bytes Total/Sec" sampleRate="PT3M" />
          <PerformanceCounterConfiguration counterSpecifier="\ASP.NET Applications(__Total__)\Requests/Sec" sampleRate="PT3M" />
          <PerformanceCounterConfiguration counterSpecifier="\ASP.NET Applications(__Total__)\Errors Total/Sec" sampleRate="PT3M" />
          <PerformanceCounterConfiguration counterSpecifier="\ASP.NET\Requests Queued" sampleRate="PT3M" />
          <PerformanceCounterConfiguration counterSpecifier="\ASP.NET\Requests Rejected" sampleRate="PT3M" />
          <PerformanceCounterConfiguration counterSpecifier="\Processor(_Total)\% Processor Time" sampleRate="PT3M" />
        </PerformanceCounters>
        <WindowsEventLog scheduledTransferPeriod="PT1M">
          <DataSource name="Application!*" />
        </WindowsEventLog>
        <CrashDumps dumpType="Full">
          <CrashDumpConfiguration processName="WaAppAgent.exe" />
          <CrashDumpConfiguration processName="WaIISHost.exe" />
          <CrashDumpConfiguration processName="WindowsAzureGuestAgent.exe" />
          <CrashDumpConfiguration processName="WaWorkerHost.exe" />
          <CrashDumpConfiguration processName="DiagnosticsAgent.exe" />
          <CrashDumpConfiguration processName="w3wp.exe" />
        </CrashDumps>
        <Logs scheduledTransferPeriod="PT3M" scheduledTransferLogLevelFilter="Verbose" />
      </DiagnosticMonitorConfiguration>
    </WadCfg>
    <StorageAccount>********</StorageAccount>
  </PublicConfig>
  <PrivateConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
    <StorageAccount name="*******" key="******" endpoint="" />
  </PrivateConfig>
  <IsEnabled>true</IsEnabled>
Note: I am using Azure SDK 2.5.
Please kindly guide me on how to proceed further.
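For reference, the sketch below (not necessarily the missing piece in this project) shows how Trace output is usually routed to the diagnostics agent. It assumes a reference to the classic Microsoft.WindowsAzure.Diagnostics assembly; normally the listener is registered in web.config under <system.diagnostics>, and WADLogsTable only appears after at least one trace above the filter has been written and the scheduled transfer (PT3M above) has run.

    // Sketch only; assumes Microsoft.WindowsAzure.Diagnostics is referenced.
    using System.Diagnostics;
    using System.Linq;
    using Microsoft.WindowsAzure.Diagnostics;

    public static class PolicyTrace
    {
        public static void LogPolicyStarted()
        {
            // Without this listener (registered in code or in web.config), Trace.* calls
            // never reach the diagnostics agent and WADLogsTable is never created.
            if (!Trace.Listeners.OfType<DiagnosticMonitorTraceListener>().Any())
            {
                Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
            }

            // Shows up in WADLogsTable after the next scheduled transfer.
            Trace.TraceInformation("Policy started");
        }
    }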


Similar Messages

  • WADLogs table not being recreated

    Hi,
    I have an app that uses the WADLogs table to store diagnostic trace messages from the app. Recently I deleted my WADLogs table to clean up space. I have done this once before and Azure recreated the WADLogs table, but not this time. Can anyone give me hints on where to dig for a solution?

    Hi,
    There was a similar thread:
    http://social.msdn.microsoft.com/Forums/windowsazure/en-US/3ecb9412-e312-430c-b971-53c90e715423/wadlogstable-not-being-created?forum=windowsazuredevelopment 
    Here is the solution from that thread:
     I did notice that Table monitoring on the Windows Azure dashboard was off. I set it to "minimal" and the table finally showed up.
    Hope this helps
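    As a side note, a quick way to confirm whether the diagnostics agent has recreated the table is to query the storage account directly. This is only a sketch, assuming the WindowsAzure.Storage client library and placeholder credentials; WADLogsTable is created lazily, so it will not exist until the first scheduled transfer actually ships a log entity (and a just-deleted table can take a while to become available again server-side):

        // Sketch only; replace the placeholder connection string with real credentials.
        using System;
        using Microsoft.WindowsAzure.Storage;
        using Microsoft.WindowsAzure.Storage.Table;

        class WadLogsTableCheck
        {
            static void Main()
            {
                CloudStorageAccount account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");
                CloudTable table = account.CreateCloudTableClient().GetTableReference("WADLogsTable");

                // false simply means no log batch has been transferred yet.
                Console.WriteLine(table.Exists()
                    ? "WADLogsTable exists"
                    : "WADLogsTable has not been (re)created yet");
            }
        }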

  • Oracle 11gR2 Partition tables not creating in default user tablespace

    Hi all,
    Not sure if I'm missing something or have overlooked it, but when I create a partitioned table in a user schema, it is not created in the schema's default tablespace; instead the table is created with no tablespace assigned and ends up using the SYSTEM tablespace.
    create user dgp identified by dgp default tablespace dgp temporary tablespace temp;
    grant connect, resource to dgp;
    select USERNAME, DEFAULT_TABLESPACE from dba_users where username = 'DGP';
    USERNAME DEFAULT_TABLESPACE
    DGP DG
    select table_name, tablespace_name, partitioned from all_tables where owner='DGP';
    TABLE_NAME TABLESPACE_NAME PAR
    AUDITLOG_P2 DG NO
    AUDITLOG_P YES
    This is the partition script I used (I also gave the tablespace name):
    CREATE TABLE dgp.AUDITLOG_P (
    entry_time DATE,
    username VARCHAR2(14),
    groupname VARCHAR2(100),
    ip VARCHAR2(15),
    command VARCHAR2(15),
    directory VARCHAR2(300)
    )
    PARTITION BY RANGE (entry_time)
    (
    PARTITION P_PAST VALUES LESS THAN (TO_DATE('2010-01-01','YYYY-MM-DD'))
    )
    TABLESPACE DG;
    ============
    What is it I'm missing? Is anything different with Oracle 11gR2 regarding partition creation?
    Thanks for your help..
    Regards,
    Ash

    Yes, I tried using the schema login and creating the table, as well as using system with the schema name prefix.
    This is what I get from the queries below:
    SQL> select def_tablespace_name from dba_part_tables where table_name ='AUDITLOG_P';
    DEF_TABLESPACE_NAME
    DG
    SQL> select partition_name, tablespace_name from dba_tab_partitions where table_name='AUDITLOG_P';
    PARTITION_NAME TABLESPACE_NAME
    P_PAST DG
    P_20100101 DG
    P_20100102 DG
    P_20100103 DG
    P_20100104 DG
    P_20100105 DG
    P_20100106 DG
    P_20100107 DG
    P_20100108 DG
    P_20100109 DG
    P_20100110 DG
    P_20100111 DG
    P_20100112 DG
    P_20100113 DG
    P_20100114 DG
    P_20100115 DG
    P_20100116 DG
    P_20100117 DG
    P_20100118 DG
    P_20100119 DG
    P_20100120 DG
    P_20100121 DG
    P_20100122 DG
    P_20100123 DG
    P_20100124 DG
    P_20100125 DG
    P_20100126 DG
    P_20100127 DG
    P_20100128 DG
    P_20100129 DG
    P_FUTURE DG
    31 rows selected.

  • Temporary Table Not Creating With Rows

    I'm running Oracle 9i on Windows XP, and I'm trying to create a temporary table to use in a larger query. The problem is that when I create it using ON COMMIT DELETE ROWS, it has no rows after creation. If I use ON COMMIT PRESERVE ROWS, then I can't drop it unless I log out and come back.
    Here's the query
    CREATE GLOBAL TEMPORARY TABLE tempaltid
    ON COMMIT PRESERVE ROWS as (select distributionid
    from distributions d
    where D.distributionid not in
    (select distributionid
    from distributionalternatives
    where ( distributionalternatives.alternativeid not in (11018,11019,11020,11021,11022,11023,11024,
    11025,11026,11475,11476,11477,11478,11479,
    11480,11481,11482,11483,11484,11485,11486,
    11487,11488,11489,11490,11491,11492,11493,
    11494,11495)))
    and D.distributiontypeid in
    (239,209)
    and D.distributionexpdt is null);
    I'm not committing after creation, so I don't see why the table would just be empty.

    Yes, you are committing, and before the CREATE too (DDL issues an implicit commit):
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/sqlplsql.htm#sthref3520
    You don't use global temporary tables like that; you create them once and then fill and empty them repeatedly.

  • XREF_DATA table not created

    Hi,
    I am trying to create cross-references, but the xref_data table does not exist in the database. I guess it should be created automatically with the installation. Any pointers on what I should be doing?

    Unfortunately, it is not. You have to create it yourself by running the %OH%/integration/esb/sql/oracle/xrftables.sql file.
    I usually create it in the oraesb schema; I am not sure whether it is supported in any other schema.
    HTH,
    Chintan
    http://chintanblog.blogspot.com

  • Tables not created during ERP ECC 5.0 installation

    Hi guys,
    While installing SAP ERP ECC 5.0 on Oracle 9i, I am getting the following error message in the SAPSSEXC.log file:
    myCluster (3.2.Imp): 613: error when retrieving table description for physical table DOKCLU.
    myCluster (3.2.Imp): 614: return code received from nametab is 32
    myCluster (3.2.Imp): 296: error when retrieving physical nametab for table DOKCLU.
    (CNV) ERROR: code page conversion failed
                 rc = 2
    (DB) INFO: disconnected from DB
    D:\usr\sap\TX1\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    D:\usr\sap\TX1\SYS\exe\run/R3load.exe: END OF LOG: 20070305232652
    regards,
    nasir

    IDES? What does this mean?
    Could you please help me figure out the problem from the log that I posted in the thread? It is urgent.
    thanks and regards,
    nasir

  • How to handle "The specified resource does not exist" exception while using entity group transactions to purge WADLogs table

    Hi,
    We have a requirement to purge the Azure WADLogs table on a periodic basis. We are achieving this by using Entity group transactions to delete the
    records older than 15 days. The logic is like this.
    bool recordDoesNotExistExceptionOccured = false;
    CloudTable wadLogsTable = tableClient.GetTableReference(WADLogsTableName);
    partitionKey = "0" + DateTime.UtcNow.AddDays(noOfDays).Ticks;
    TableQuery<WadLogsEntity> buildQuery = new TableQuery<WadLogsEntity>().Where(
        TableQuery.GenerateFilterCondition("PartitionKey",
            QueryComparisons.LessThanOrEqual, partitionKey));

    while (!recordDoesNotExistExceptionOccured)
    {
        IEnumerable<WadLogsEntity> result = wadLogsTable.ExecuteQuery(buildQuery).Take(1000);

        //// Batch entity delete.
        if (result != null && result.Count() > 0)
        {
            Dictionary<string, TableBatchOperation> batches = new Dictionary<string, TableBatchOperation>();
            foreach (var entity in result)
            {
                TableOperation tableOperation = TableOperation.Delete(entity);
                if (!batches.ContainsKey(entity.PartitionKey))
                {
                    batches.Add(entity.PartitionKey, new TableBatchOperation());
                }

                // A Batch Operation allows a maximum 100 entities in the batch which must share the same PartitionKey.
                if (batches[entity.PartitionKey].Count < 100)
                {
                    batches[entity.PartitionKey].Add(tableOperation);
                }
            }

            // Execute batches.
            foreach (var batch in batches.Values)
            {
                try
                {
                    await wadLogsTable.ExecuteBatchAsync(batch);
                }
                catch (Exception exception)
                {
                    // Log exception here.
                    // Set flag.
                    if (exception.Message.Contains(ResourceDoesNotExist))
                    {
                        recordDoesNotExistExceptionOccured = true;
                        break;
                    }
                    else
                    {
                        break;
                    }
                }
            }
        }
    }
    My questions are:
    Is this an efficient way to purge the WADLogs table? If not, what can make this better?
    Is this the correct way to handle the "Specified resource does not exist exception"? If not, how can I make this better?
    Would this logic fail in any particular case?
    How would this approach change if this code is in a worker which has multiple instances deployed?
    I have come up with this code by referencing the solution given
    here by Keith Murray.

    Hi Nikhil,
    Thanks for your posting!
    I tested your and Keith's code on my side, and everything worked fine. When the result is null or its count is 0, the while() loop breaks. I found your code already has logic to handle the "ResourceDoesNotExist" error.
    It seems the code works fine. If you always get this error, I suggest you debug your code and find which line throws the exception.
    >> Is this an efficient way to purge the WADLogs table? If not, what can make this better?
    Based on my experience, we can use code like the logic above, or use a third-party tool to delete the entities manually. In my opinion the code approach is very efficient: it can run automatically and saves us workload.
    >> Is this the correct way to handle the "Specified resource does not exist" exception? If not, how can I make this better?
    In your code you used recordDoesNotExistExceptionOccured as a flag to check whether the entity is null, which is a good choice. When I deleted log table entities, I used a flag to check the result count instead. For example, if I expected the query result count to be 100 and the number came back lower than 100, I set the flag and broke out of the while loop.
    >> Would this logic fail in any particular case?
    I don't think it should fail, but if the result count is 0 your while loop will run forever and never stop. I think you should add "recordDoesNotExistExceptionOccured = true;" to your "else" block.
    >> How would this approach change if this code is in a worker which has multiple instances deployed?
    You don't need to change anything except the "else" block. It would work fine in the worker role.
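    For clarity, here is a minimal sketch of the loop-exit change suggested above, reusing the variable names from the snippet earlier in this thread (wadLogsTable, buildQuery and recordDoesNotExistExceptionOccured are assumed to be declared as shown); it simply stops the loop once the query returns no more entities:

        while (!recordDoesNotExistExceptionOccured)
        {
            var result = wadLogsTable.ExecuteQuery(buildQuery).Take(1000).ToList();

            if (result.Count == 0)
            {
                // Nothing left to purge: stop instead of spinning forever.
                recordDoesNotExistExceptionOccured = true;
                break;
            }

            // ... batch-delete logic as in the original snippet ...
        }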
    If you have any questions about this issue, please feel free to let me know.
    Regards,
    Will

  • .svclog file is not creating on cloud when cloud service is deployed into azure website.

    I have created a WCF cloud service which is deployed to the cloud through a Bitbucket repository.
    I want to create a .svclog file to trace logs in my Azure local storage.
    For that, I have referred to many posts and finally configured my solution as below:
    ServiceConfiguration.Cloud.cscfg:
    <Role name="MyServiceWebRole">    <Instances count="1" />    <ConfigurationSettings>      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"                value="DefaultEndpointsProtocol=https;AccountName=StorageName;AccountKey=MyStorageKey" />    </ConfigurationSettings>    <Certificates>      <Certificate name="Certificate" thumbprint="certificatethumbprint" thumbprintAlgorithm="sha1" />    </Certificates>  </Role>
    ServiceConfiguration.Local.cscfg:
    <Role name="MyServiceWebRole">
        <Instances count="1" />    <ConfigurationSettings>      <!--Also tried with value = "UseDevelopmentStorage=true"-->      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"               value="DefaultEndpointsProtocol=https;AccountName=StorageName;AccountKey=MyStorageKey" />    </ConfigurationSettings>    <Certificates>      <Certificate name="Certificate" thumbprint="certificatethumbprint" thumbprintAlgorithm="sha1" />    </Certificates>  </Role>
    ServiceDefinition.csdef:
    <WebRole name="MyServiceWebRole" vmsize="Small">    <Sites>      <Site name="Web">        <Bindings>          <Binding name="Endpoint1" endpointName="Endpoint1" />        </Bindings>      </Site>    </Sites>    <Endpoints>      <InputEndpoint name="Endpoint1" protocol="http" port="80" />    </Endpoints>    <Imports>      <Import moduleName="Diagnostics" />    </Imports>    <LocalResources>      <LocalStorage name="MyServiceWebRole.svclog" sizeInMB="1000" cleanOnRoleRecycle="false" />    </LocalResources>    <Certificates>      <Certificate name="Certificate" storeLocation="LocalMachine" storeName="My" />    </Certificates>  </WebRole>
    web.config (MyServiceWebRole project):
    <system.diagnostics>
      <trace autoflush="false">
        <listeners>
          <add name="AzureDiagnostics"
               type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.2.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
        </listeners>
      </trace>
    </system.diagnostics>
    ............
    <system.serviceModel>
      <diagnostics>
        <messageLogging maxMessagesToLog="3000"
                        logEntireMessage="true"
                        logMessagesAtServiceLevel="true"
                        logMalformedMessages="true"
                        logMessagesAtTransportLevel="true" />
      </diagnostics>
    ............
    <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
        <dependentAssembly>
          <assemblyIdentity name="Microsoft.WindowsAzure.Diagnostics" publicKeyToken="31bf3856ad364e35" culture="neutral" />
          <!--<bindingRedirect oldVersion="0.0.0.0-1.8.0.0" newVersion="2.2.0.0" />-->
        </dependentAssembly>
      </assemblyBinding>
    </runtime>
    WebRole.cs (MyServiceWebRole project):
    public override bool OnStart()
    {
        //Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
        Trace.Listeners.Add(new AzureLocalStorageTraceListener());
        Trace.AutoFlush = false;
        Trace.TraceInformation("Information");
        Trace.TraceError("Error");
        Trace.TraceWarning("Warning");
        TimeSpan tsOneMinute = TimeSpan.FromMinutes(1);

        // To enable the AzureLocalStorageTraceListener, uncomment the relevant section in the web.config.
        DiagnosticMonitorConfiguration diagnosticConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
        // Transfer logs to storage every minute.
        diagnosticConfig.Logs.ScheduledTransferPeriod = tsOneMinute;
        // Transfer verbose, critical, etc. logs.
        diagnosticConfig.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        // Start up the diagnostic manager with the given configuration.
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagnosticConfig);
        // For information on handling configuration changes,
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
        return base.OnStart();
    }
    AzureLocalStorageTraceListener.cs (MyServiceWebRole project):
    public class AzureLocalStorageTraceListener : XmlWriterTraceListener
    {
        public AzureLocalStorageTraceListener()
            : base(Path.Combine(GetLogDirectory().Path, "MyServiceWebRole.svclog"))
        {
        }

        public static DirectoryConfiguration GetLogDirectory()
        {
            try
            {
                DirectoryConfiguration directory = new DirectoryConfiguration();
                // SHOULD I HAVE THIS CONTAINER ALREADY EXIST IN MY LOCAL STORAGE?
                directory.Container = "wad-tracefiles";
                directory.DirectoryQuotaInMB = 10;
                directory.Path = RoleEnvironment.GetLocalResource("MyServiceWebRole.svclog").RootPath;
                var val = RoleEnvironment.GetConfigurationSettingValue("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
                return directory;
            }
            catch (ConfigurationErrorsException ex)
            {
                throw ex;
            }
        }
    }
    I also tried to comment out that element in the ServiceDefinition.csdef file, but then I get a build-time error (The XML specification is not valid).
    In my case, I am pushing all source code to a Bitbucket repository and from there it is deployed to the Azure "WebSite". Here are more details:
    I need help to know:
    Why is my service not creating the .svclog file, either locally or in Azure?
    Why does it also not do so even after it has been deployed to Azure?
    In which location (container) can I find the .svclog file in local storage?
    Please suggest the correct approach or modifications so that I can overcome this issue. Please reply soon.
    Thanks.

    Hello _Adian,
    Thanks for the response.
    I uploaded all my code to a Bitbucket repository and configured a website on the portal using "Integrate source control" (please refer to: http://azure.microsoft.com/en-in/documentation/articles/web-sites-publish-source-control/).
    (NOTE: This is the way my client is following.)
    Here is the structure of my solution:
    1. a WCF service application (.svc)
    2. a few class library projects
    3. an Azure cloud service (with project 1 as the web role).
    Now whenever I push my updated code to Bitbucket, it is automatically deployed to Azure.
    So please suggest how I can create a separate .svclog file in local storage (using the above environment).
    I hope this info is helpful for an answer.
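    One thing that stands out in the OnStart() posted above is that the directory returned by GetLogDirectory() is never added to the diagnostics configuration, so the .svclog is written to local storage but never shipped to blob storage. The following is only a sketch of the classic pre-SDK-2.5 pattern; it assumes the same class and setting names used in the question, and that the cloud-service web role (not the website) hosts the code:

        public override bool OnStart()
        {
            DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();

            // Ship the local-storage folder that backs the XmlWriterTraceListener to the
            // "wad-tracefiles" container on the configured storage account.
            config.Directories.DataSources.Add(AzureLocalStorageTraceListener.GetLogDirectory());
            config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
            return base.OnStart();
        }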

  • Database Initialiser does not create azure sql database

    I have a WPF application. In OnStartup in app.cs I set the database initializer and force the context to initialize my database:
    Debug.WriteLine("Setting Initializer");
    Database.SetInitializer<MyContext>(new MyDatabaseInitializer());
    Debug.WriteLine("Declaring new context");
    using (MyContext c = new MyContext("MyContext"))
    {
        Debug.WriteLine("Force the initialization");
        c.Database.Initialize(true);
    }
    Debug.WriteLine("Done!");
    I created a SQL database in the Azure management portal and copied the connection string it provided for ADO.NET, but my database is not created.
    I also added a firewall rule but nothing happens. I have no clue what to do.
    Can anybody please help me with this?
    If you need more information please ask; I really have to get this sorted out.
    Thanks in advance!

    Hi Turkstra,
    I have tried using EF to create an Azure SQL database and it works as expected; the database 'jambordbcreate' appears in my SQL Azure server. Below is the detailed code.
    using System;
    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    namespace CodeFirst
    {
        class Program
        {
            static void Main(string[] args)
            {
                Database.SetInitializer(
                    new CreateDatabaseIfNotExists<SchContext>());

                using (var db = new SchContext("Server=tcp:****.database.windows.net,1433;Database=jambordbcreate;User ID=vote@***;Password=***;Trusted_Connection=False;Encrypt=True;Connection Timeout=30"))
                {
                    string name = "jambor";
                    var student = new Student() { Name = name, ID = "1a" };
                    db.Students.Add(student);
                    db.SaveChanges();
                    db.Database.Initialize(true);
                }
            }
        }

        public class Student
        {
            public string ID { get; set; }
            public string Name { get; set; }
            public string age { get; set; }
            public string sex { get; set; }
        }

        public class School
        {
            public string ID { get; set; }
            public string Name { get; set; }
            public virtual List<Student> Students { get; set; }
        }

        public class SchContext : DbContext
        {
            public SchContext(string connection) : base(connection)
            {
            }

            public DbSet<Student> Students { get; set; }
            public DbSet<School> Schools { get; set; }
        }
    }
    I suggest you check your SQL connection; after running your code, please refresh the Azure portal to see whether your database exists. Hope this gives you some help.
    Best Regards,
    Jambor
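    As a follow-up to the suggestion above to verify the result, here is a small sketch for checking whether the initializer actually created the database. It assumes a System.Data.SqlClient reference and placeholder server credentials, and simply queries sys.databases on the logical server's master database:

        using System;
        using System.Data.SqlClient;

        class CheckAzureDatabase
        {
            static void Main()
            {
                // Placeholder connection string pointing at the logical server's master database.
                var builder = new SqlConnectionStringBuilder(
                    "Server=tcp:<server>.database.windows.net,1433;User ID=<user>@<server>;Password=<password>;Encrypt=True")
                {
                    InitialCatalog = "master"
                };

                using (var conn = new SqlConnection(builder.ConnectionString))
                using (var cmd = new SqlCommand("SELECT COUNT(*) FROM sys.databases WHERE name = @name", conn))
                {
                    cmd.Parameters.AddWithValue("@name", "jambordbcreate");
                    conn.Open();
                    bool exists = (int)cmd.ExecuteScalar() > 0;
                    Console.WriteLine(exists ? "Database exists" : "Database was not created");
                }
            }
        }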

  • Create UDF for table not in the List of tables

    Hi all,
    I know my question may be easy or may have been asked before, but I couldn't find the answer.
    To create a UDF in SAP B1 version 9.0 you go to Tools -> Customization Tools -> User-Defined Fields - Management...,
    which is OK and works perfectly. But my question is:
    If I want to create a UDF for a table that is not in the list of tables there, what should I do? I need to create 2 UDFs for table OMRC [Manufacturers], and I can't find it in the master data tables.
    Has anyone had this issue before?
    EDIT: Is it OK to add the field using SQL Server? I know it's possible, but will it be visible in SAP?
    Thank you
    Message was edited by: Samira Haroun

    Hi Samira,
    There is not a simple link for this; I advise you to study the documentation for the TB1300 SBO Development Certification.
    You should also have knowledge of .NET and C# or VB, because you have to write a small program/add-on to add the fields.
    Kind regards
    Ad Kerremans

  • Buffering table not up to date error message when creating a Cart

    Hi Folks,
    We are getting a 'Buffering table not up to date' error message when attempting to create a shopping cart. The error only happens for one end-user ID; the others do not get this error, which suggests that my SRM org plan set-up is correct.
    Has anyone come across this previously, and what checks are available in the system to resolve it? As mentioned, the attribute check is OK and I have also removed the user ID from the SRM org plan and reassigned it, but this has not corrected the problem. We are on SRM 5.
    Thanks. Mike.
    Message:
    Buffering table not up to date
    Method: GET_STRUCTURE_PATHS_UP of program CL_BBP_ES_EMPLOYEE_MYS========CP
    Method: IF_BBP_ES_EMPLOYEE~GET_RL_UNIT_IDS of program CL_BBP_ES_EMPLOYEE_MYS========CP
    Method: IF_BBP_ES_PROFESSIONAL~GET_WORKPLACE_ADDRESS_IDS of program CL_BBP_ES_EMPLOYEE_MYS========CP
    Method: IF_BBP_ES_PROFESSIONAL~GET_WORKPLACE_ADDRESS_ID of program CL_BBP_ES_EMPLOYEE_MYS========CP
    Method: IF_BBP_ES_PROFESSIONAL~GET_WORKPLACE_ADDRESS of program CL_BBP_ES_EMPLOYEE_MYS========CP
    Form: USER_DETAIL_GET of program SAPLBBP_SC_APP
    Form: GLOBAL_FILL of program SAPLBBP_SC_APP
    Form: SC_INIT of program SAPLBBP_SC_APP
    Function: BBP_SC_APP_EVENT_DISPATCHER of program SAPLBBP_SC_APP
    Form: APP_EVENT_HANDLER of program SAPLBBP_SC_UI_ITS
    Edited by: Mike Pallister on Nov 5, 2008 11:44 AM

    Please advise on this problem. When I try to check the Approval Overview tab for these two shopping carts, I get a dump. Can anyone help me?
    Information on where terminated
        Termination occurred in the ABAP program "CL_BBP_ES_EMPLOYEE_MYS========CP" -
         in "IF_BBP_ES_EMPLOYEE~GET_RL_UNIT_IDS".
        The main program was "SAPMHTTP ".
        In the source code you have the termination point in line 35
        of the (Include) program "CL_BBP_ES_EMPLOYEE_MYS========CM008".
        The termination is caused because exception "CX_BBP_ES_INTERNAL_ERROR" occurred
         in
        procedure "/SAPSRM/IF_PDO_DO_APV_EXT~GET_AGENT_DETAILS" "(METHOD)", but it was
         neither handled locally nor declared
        in the RAISING clause of its signature.
        The procedure is in program "/SAPSRM/CL_PDO_DO_APV_EXT=====CP "; its source
         code begins in line
        1 of the (Include program "/SAPSRM/CL_PDO_DO_APV_EXT=====CM00E ".

  • Bad file is not created during the external table creation.

    Hello Experts,
    I have created a script for an external table in an Oracle 10g DB. Everything is working fine except that the bad file is not created, although the log file is. I can't figure out what the issue is; because of this my shell script is failing and the entire program fails. I am attaching the table creation script, the shell script where it is referenced, and the error. Kindly let me know if something is missing. Thanks in advance.
    Table creation script:
    create table RGIS_TCA_DATA_EXT
    (
    guid VARCHAR2(250),
    badge VARCHAR2(250),
    scheduled_store_id VARCHAR2(250),
    parent_event_id VARCHAR2(250),
    event_id VARCHAR2(250),
    organization_number VARCHAR2(250),
    customer_number VARCHAR2(250),
    store_number VARCHAR2(250),
    inventory_date VARCHAR2(250),
    full_name VARCHAR2(250),
    punch_type VARCHAR2(250),
    punch_start_date_time VARCHAR2(250),
    punch_end_date_time VARCHAR2(250),
    event_meet_site_id VARCHAR2(250),
    vehicle_number VARCHAR2(250),
    vehicle_description VARCHAR2(250),
    vehicle_type VARCHAR2(250),
    is_owner VARCHAR2(250),
    driver_passenger VARCHAR2(250),
    mileage VARCHAR2(250),
    adder_code VARCHAR2(250),
    bonus_qualifier_code VARCHAR2(250),
    store_accuracy VARCHAR2(250),
    store_length VARCHAR2(250),
    badge_input_type VARCHAR2(250),
    source VARCHAR2(250),
    created_by VARCHAR2(250),
    created_date_time VARCHAR2(250),
    updated_by VARCHAR2(250),
    updated_date_time VARCHAR2(250),
    approver_badge_id VARCHAR2(250),
    approver_name VARCHAR2(250),
    orig_guid VARCHAR2(250),
    edit_type VARCHAR2(250)
    )
    organization external
    (
    type ORACLE_LOADER
    default directory ETIME_LOAD_DIR
    access parameters
    (
    RECORDS DELIMITED BY NEWLINE
    BADFILE ETIME_LOAD_DIR:'tstlms.bad'
    LOGFILE ETIME_LOAD_DIR:'tstlms.log'
    READSIZE 1048576
    FIELDS TERMINATED BY '|'
    MISSING FIELD VALUES ARE NULL
    (
    GUID
    ,BADGE
    ,SCHEDULED_STORE_ID
    ,PARENT_EVENT_ID
    ,EVENT_ID
    ,ORGANIZATION_NUMBER
    ,CUSTOMER_NUMBER
    ,STORE_NUMBER
    ,INVENTORY_DATE char date_format date mask "YYYYMMDD HH24:MI:SS"
    ,FULL_NAME
    ,PUNCH_TYPE
    ,PUNCH_START_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
    ,PUNCH_END_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
    ,EVENT_MEET_SITE_ID
    ,VEHICLE_NUMBER
    ,VEHICLE_DESCRIPTION
    ,VEHICLE_TYPE
    ,IS_OWNER
    ,DRIVER_PASSENGER
    ,MILEAGE
    ,ADDER_CODE
    ,BONUS_QUALIFIER_CODE
    ,STORE_ACCURACY
    ,STORE_LENGTH
    ,BADGE_INPUT_TYPE
    ,SOURCE
    ,CREATED_BY
    ,CREATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
    ,UPDATED_BY
    ,UPDATED_DATE_TIME char date_format date mask "YYYYMMDD HH24:MI:SS"
    ,APPROVER_BADGE_ID
    ,APPROVER_NAME
    ,ORIG_GUID
    ,EDIT_TYPE
    )
    )
    location (ETIME_LOAD_DIR:'tstlms.dat')
    )
    reject limit UNLIMITED;
    Shell script:
    version=1.0
    umask 000
    DATE=`date +%Y%m%d%H%M%S`
    TIME=`date +"%H%M%S"`
    SOURCE=`hostname`
    fcp_login=`echo $1|awk '{print $3}'|sed 's/"//g'|awk -F= '{print $2}'`
    fcp_reqid=`echo $1|awk '{print $2}'|sed 's/"//g'|awk -F= '{print $2}'`
    TXT1_PATH=/home/ac1/oracle/in/tsdata
    TXT2_PATH=/home/ac2/oracle/in/tsdata
    ARCH1_PATH=/home/ac1/oracle/in/tsdata
    ARCH2_PATH=/home/ac2/oracle/in/tsdata
    DEST_PATH=/home/custom/sched/in
    PROGLOG=/home/custom/sched/logs/rgis_tca_to_tlms_create.sh.log
    PROGNAME=`basename $0`
    PROGPATH=/home/custom/sched/scripts
    cd $TXT2_PATH
    FILELIST2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
    NO_OF_FILES2="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
    $DEST_PATH/tstlmsedits.dat
    for i in $FILELIST2
    do
    cat $i >> $DEST_PATH/tstlmsedits.dat
    printf "\n" >> $DEST_PATH/tstlmsedits.dat
    mv $i $i.$DATE
    #mv $i $TXT2_PATH/test/.
    mv $i.$DATE $TXT2_PATH/test/.
    done
    if test $NO_OF_FILES2 -eq 0
    then
    echo " no tstlmsedits.dat file exists " >> $PROGLOG
    else
    echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
    echo "-------------------------------------------" >> $PROGLOG
    fi
    NO_OF_FILES1="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
    FILELIST1="`ls -lrt tstlms*.dat |awk '{print $9}'`"
    $DEST_PATH/tstlms.dat
    for i in $FILELIST1
    do
    cat $i >> $DEST_PATH/tstlms.dat
    printf "\n" >> $DEST_PATH/tstlms.dat
    mv $i $i.$DATE
    # mv $i $TXT2_PATH/test/.
    mv $i.$DATE $TXT2_PATH/test/.
    done
    if test $NO_OF_FILES1 -eq 0
    then
    echo " no tstlms.dat file exists " >> $PROGLOG
    else
    echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
    fi
    cd $TXT1_PATH
    FILELIST3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'`"
    NO_OF_FILES3="`ls -lrt tstlmsedits*.dat |awk '{print $9}'|wc -l`"
    $DEST_PATH/tstlmsedits.dat
    for i in $FILELIST3
    do
    cat $i >> $DEST_PATH/tstlmsedits.dat
    printf "\n" >> $DEST_PATH/tstlmsedits.dat
    mv $i $i.$DATE
    #mv $i $TXT1_PATH/test/.
    mv $i.$DATE $TXT1_PATH/test/.
    done
    if test $NO_OF_FILES3 -eq 0
    then
    echo " no tstlmsedits.dat file exists " >> $PROGLOG
    else
    echo "created dat file tstlmsedits.dat at $DATE" >> $PROGLOG
    echo "-------------------------------------------" >> $PROGLOG
    fi
    NO_OF_FILES4="`ls -lrt tstlms*.dat |awk '{print $9}'|wc -l`"
    FILELIST4="`ls -lrt tstlms*.dat |awk '{print $9}'`"
    $DEST_PATH/tstlms.dat
    for i in $FILELIST4
    do
    cat $i >> $DEST_PATH/tstlms.dat
    printf "\n" >> $DEST_PATH/tstlms.dat
    mv $i $i.$DATE
    # mv $i $TXT1_PATH/test/.
    mv $i.$DATE $TXT1_PATH/test/.
    done
    if test $NO_OF_FILES4 -eq 0
    then
    echo " no tstlms.dat file exists " >> $PROGLOG
    else
    echo "created dat file tstlms.dat at $DATE" >> $PROGLOG
    fi
    #connecting to oracle to generate bad files
    sqlplus -s $fcp_login<<EOF
    select count(*) from rgis_tca_data_ext;
    select count(*) from rgis_tca_data_history_ext;
    exit;
    EOF
    #counting the records in files
    tot_rec_in_tstlms=`wc -l $DEST_PATH/tstlms.dat | awk ' { print $1 } '`
    tot_rec_in_tstlmsedits=`wc -l $DEST_PATH/tstlmsedits.dat | awk ' { print $1 } '`
    tot_rec_in_tstlms_bad=`wc -l $DEST_PATH/tstlms.bad | awk ' { print $1 } '`
    tot_rec_in_tstlmsedits_bad=`wc -l $DEST_PATH/tstlmsedits.bad | awk ' { print $1 } '`
    #updating log table
    echo "pl/sql block started"
    sqlplus -s $fcp_login<<EOF
    define tot_rec_in_tstlms     = '$tot_rec_in_tstlms';
    define tot_rec_in_tstlmsedits     = '$tot_rec_in_tstlmsedits';
    define tot_rec_in_tstlms_bad     = '$tot_rec_in_tstlms_bad';
    define tot_rec_in_tstlmsedits_bad='$tot_rec_in_tstlmsedits_bad';
    define fcp_reqid ='$fcp_reqid';
    declare
    l_tstlms_file_id number := null;
    l_tstlmsedits_file_id number := null;
    l_tot_rec_in_tstlms number := 0;
    l_tot_rec_in_tstlmsedits number := 0;
    l_tot_rec_in_tstlms_bad number := 0;
    l_tot_rec_in_tstlmsedits_bad number := 0;
    l_request_id fnd_concurrent_requests.request_id%type;
    l_start_date fnd_concurrent_requests.actual_start_date%type;
    l_end_date fnd_concurrent_requests.actual_completion_date%type;
    l_conc_prog_name fnd_concurrent_programs.concurrent_program_name%type;
    l_requested_by fnd_concurrent_requests.requested_by%type;
    l_requested_date fnd_concurrent_requests.request_date%type;
    begin
    --getting concurrent request details
    begin
    SELECT fcp.concurrent_program_name,
    fcr.request_id,
    fcr.actual_start_date,
    fcr.actual_completion_date,
    fcr.requested_by,
    fcr.request_date
    INTO l_conc_prog_name,
    l_request_id,
    l_start_date,
    l_end_date,
    l_requested_by,
    l_requested_date
    FROM fnd_concurrent_requests fcr, fnd_concurrent_programs fcp
    WHERE fcp.concurrent_program_id = fcr.concurrent_program_id
    AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
    exception
    when no_data_found then
    fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
    fnd_file.put_line(fnd_file.log, 'No data found for request_id');
    fnd_file.put_line(fnd_file.log, sqlerrm);
    raise_application_error(-20001,
    'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
    sqlerrm);
    when others then
    fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
    fnd_file.put_line(fnd_file.log,
    'Error occured when retrieving request_id request_id');
    fnd_file.put_line(fnd_file.log, sqlerrm);
    raise_application_error(-20001,
    'Error occured when executing RGIS_TCA_TO_TLMS_CREATE.sh ' ||
    sqlerrm);
    end;
    --calling ins_or_upd_tca_process_log to update log table for tstlms.dat file
    begin
    rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
                   (l_tstlms_file_id,
                   'tstlms.dat',
                   l_conc_prog_name,
                   l_request_id,
                   l_start_date,
                   l_end_date,
                   &tot_rec_in_tstlms,
                   &tot_rec_in_tstlms_bad,
                   null,
                   null,               
                   null,
                   null,
                   null,
                   null,
                   null,
                   l_requested_by,
                   l_requested_date,
                   null,
                   null,
                   null,
                   null,
                   null);
    exception
    when others then
    fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
    fnd_file.put_line(fnd_file.log,
    'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlms file');
    fnd_file.put_line(fnd_file.log, sqlerrm);
    end;
    --calling ins_or_upd_tca_process_log to update log table for tstlmsedits.dat file
    begin
    rgis_tca_to_tlms_process.ins_or_upd_tca_process_log
                   (l_tstlmsedits_file_id,
                   'tstlmsedits.dat',
                   l_conc_prog_name,
                   l_request_id,
                   l_start_date,
                   l_end_date,
                   &tot_rec_in_tstlmsedits,
                   &tot_rec_in_tstlmsedits_bad,
                   null,
                   null,               
                   null,
                   null,
                   null,
                   null,
                   null,
                   l_requested_by,
                   l_requested_date,
                   null,
                   null,
                   null,
                   null,
                   null);
    exception
    when others then
    fnd_file.put_line(fnd_file.log, 'Error:RGIS_TCA_TO_TLMS_CREATE.sh');
    fnd_file.put_line(fnd_file.log,
    'Error occured when executing rgis_tca_to_tlms_process.ins_or_upd_tca_process_log for tstlmsedits file');
    fnd_file.put_line(fnd_file.log, sqlerrm);
    end;
    end;
    exit;
    EOF
    echo "rgis_tca_to_tlms_process.sql started"
    sqlplus -s $fcp_login @$SCHED_TOP/sql/rgis_tca_to_tlms_process.sql $fcp_reqid
    exit;
    echo "rgis_tca_to_tlms_process.sql ended"
    Error:
    RGIS Scheduling: Version : UNKNOWN
    Copyright (c) 1979, 1999, Oracle Corporation. All rights reserved.
    TCATLMS module: TCA To TLMS Import Process
    Current system time is 18-AUG-2011 06:13:27
    COUNT(*)
         16
    COUNT(*)
         25
    wc: cannot open /home/custom/sched/in/tstlms.bad
    wc: cannot open /home/custom/sched/in/tstlmsedits.bad
    pl/sql block started
    old 33:     AND fcr.request_id = &fcp_reqid; --fnd_global.conc_request_id();
    new 33:     AND fcr.request_id = 18661823; --fnd_global.conc_request_id();
    old 63:                &tot_rec_in_tstlms,
    new 63:                16,
    old 64:                &tot_rec_in_tstlms_bad,
    new 64:                ,
    old 97:                &tot_rec_in_tstlmsedits,
    new 97:                25,
    old 98:                &tot_rec_in_tstlmsedits_bad,
    new 98:                ,
    ERROR at line 64:
    ORA-06550: line 64, column 4:
    PLS-00103: Encountered the symbol "," when expecting one of the following:
    ( - + case mod new not null others <an identifier>
    <a double-quoted delimited-identifier> <a bind variable> avg
    count current exists max min prior sql stddev sum variance
    execute forall merge time timestamp interval date
    <a string literal with character set specification>
    <a number> <a single-quoted SQL string> pipe
    <an alternatively-quoted string literal with character set specification>
    <an alternatively-q
    ORA-06550: line 98, column 4:
    PLS-00103: Encountered the symbol "," when expecting one of the following:
    ( - + case mod new not null others <an identifier>
    <a double-quoted delimited-identifier> <a bind variable> avg
    count current exists max min prior sql st
    rgis_tca_to_tlms_process.sql started
    old 12: and concurrent_request_id = '&1';
    new 12: and concurrent_request_id = '18661823';
    old 18: and concurrent_request_id = '&1';
    new 18: and concurrent_request_id = '18661823';
    old 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,&1);
    new 22: rgis_tca_to_tlms_process.run_tca_data(l_tstlms_file_id,18661823);
    old 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,&1);
    new 33: rgis_tca_to_tlms_process.run_tca_data_history(l_tstlmsedits_file_id,18661823);
    old 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',&1);
    new 44: rgis_tca_to_tlms_process.send_tca_email('TCATLMS',18661823);
    declare
    ERROR at line 1:
    ORA-20001: Error occured when executing RGIS_TCA_TO_TLMS_PROCESS.sql ORA-01403:
    no data found
    ORA-06512: at line 59
    Executing request completion options...
    ------------- 1) PRINT   -------------
    Printing output file.
    Request ID : 18661823      
    Number of copies : 0      
    Printer : noprint
    Finished executing request completion options.
    Concurrent request completed successfully
    Current system time is 18-AUG-2011 06:13:29
    ---------------------------------------------------------------------------

    Hi,
    Check the status of the batch in transaction SM35.
    If the batch is locked by mistake or due to any other error, you can release it and also process it again.
    To release: Shift+F4.
    You can also analyse the job status with the F2 button.
    Bye

  • How to check table is creating or not

    Hi,
    I am creating a single table by running a table creation script (table.sql) which contains 366 partitions, but I don't know whether the table is actually being created. It is taking a long time, and sometimes I feel the script is hanging. I even spooled before running the script, but the spool file shows nothing. So how can I check from the background whether the table is being created? Can anyone please let me know about this? This is a bit urgent.
    sql>spool table.log
    sql>@table_partition.sql

    Hi,
    I am running the table_partition script, which consists of 366 partitions and 31 sub-partitions, but the script is hanging. There is no hint in the alert log file or anywhere else. What might be the reason? Is it because of the extent size? The extent size for the tablespace where the table has to be created is 1 MB. What I suspect is: do I need to set a higher value in order to avoid this?
    With regards,
    Boo

  • SharePoint 2010 + RBS.msi does not create tables ("mssqlrbs") at the content DB

    Installed on W2K8 SP2 + SQL 2K8 R2 CTP November + SPS2010 beta.
    Default SQL instance MSSQLSERVER and WSS_Content default content database.
    Followed, step by step, the installation and usage documentation for the Remote BLOB Storage capability:
    http://technet.microsoft.com/en-us/library/ee748631(office.14).aspx
    According to the RBS.msi component installation log file, the installation seems correct, but it displays the error:
    ... Executing op: ActionStart(Name=FixFilestreamStoreConfig,,)
    Information 2769. The installer has encountered an unexpected error. The error code is 2769. Custom action CreateFilesNoUI did not close 21 MSIHANDLEs.
    ... Executing op: CustomActionSchedule(Action=FixFilestreamStoreConfig, ActionType=3070, Source=BinaryData, Target=RepairProvider, CustomActionData=filestream;C:\Program Files\Microsoft SQL Remote Blob Storage10.50\Provider Libraries\Filestream Provider\FilestreamProviderConfiguration.xml)
    The content database does not have the necessary "mssqlrbs" tables created, and running the command in the SharePoint shell:
    $rbss.Installed() returns "False"
    I have not found a workaround or anything that lets me create the tables and simulate the installation performed by the .msi provided in the Feature Pack for Microsoft SQL Server 2008 R2 November Community Technology Preview (CTP).
    Any help would be appreciated.
    Thx
    Eva

    Hi,
    I am getting the same error:
    The error code is 2769. Custom Action CreateFilesNoUI did not close 21 MSIHANDLEs.
    Can you please let me know how you resolved this error?
    Thanks in advance...

  • Jbo.PCollException: JBO-28006: Could not create persistence table PS_TXN_se

    Hi everyone,
    Got the following exception:
    2005-11-08 13:50:54,514 ERROR enatis.error (MsgLogger.java:logError:161) [Error Ref# INT.1131450654514] - An unhandled runtime exception occured.
    oracle.jbo.PCollException: JBO-28006: Could not create persistence table PS_TXN_seq
    at oracle.jbo.PCollException.throwException(PCollException.java:39)
    at oracle.jbo.pcoll.OraclePersistManager.createTable(OraclePersistManager.java:893)
    at oracle.jbo.pcoll.OraclePersistManager.queryNextCollectionId(OraclePersistManager.java:1372)
    at oracle.jbo.pcoll.PCollManager.register(PCollManager.java:560)
    at oracle.jbo.pcoll.PCollection.<init>(PCollection.java:102)
    at oracle.jbo.pcoll.PCollManager.createCollection(PCollManager.java:460)
    at oracle.jbo.server.DBSerializer.setup(DBSerializer.java:153)
    at oracle.jbo.server.DBSerializer.passivateRootAM(DBSerializer.java:286)
    at oracle.jbo.server.DBSerializer.passivateRootAM(DBSerializer.java:267)
    at oracle.jbo.server.ApplicationModuleImpl.passivateStateInternal(ApplicationModuleImpl.java:5123)
    at oracle.jbo.server.ApplicationModuleImpl.passivateState(ApplicationModuleImpl.java:5001)
    at oracle.jbo.server.ApplicationModuleImpl.passivateStateForUndo(ApplicationModuleImpl.java:7429)
    Does anyone know whether there is a process that is supposed to clean up this table? How is it managed?
    Thanks

    Just to wrap this up, I will attach the last couple of postings from Metalink:
    09-NOV-05 07:29:03 GMT
    New info : BUKSVDL : Hi Kjeld,
    I'm still on the passivateStateForUndo topic, this time with the PS_TXN table.
    It looks like BC4J writes to this user table when passivating the AM state.
    Please see my questions in the OTN thread below.
    jbo.PCollException: JBO-28006: Could not create persistence table PS_TXN_se
    The latest entry:
    "The data sources are correct. The problem here was the privileges after
    upgrading the db to 10g Release 2. Some of the implicit privileges were removed in
    the latest version of the db.
    The question is still: who manages these tables? When/how are entries removed?
    We see this table, "PS_TXN", growing all the time. How do we prevent problems
    like this in the future? Should we include this table, and maybe others, in the
    maintenance scripts?"
    09-NOV-05 09:29:05 GMT
    New info : BUKSVDL : Hi Kjeld,
    The DBA who did the investigation is out of the office today.
    What I can tell you is that:
    We use a data source on the app servers that is defined by the DBAs. We only
    require the DS name. Apparently, in the past, when a user was created certain
    default privileges were automatically granted. This doesn't happen anymore
    with the latest release of the DB. The DBA had to explicitly grant the
    privileges.
    09-NOV-05 10:16:09 GMT
    ISSUE CLARIFICATION
    ====================
    After upgrading the database to Oracle Server 10.1.0.2 the ADF application
    returns following error:
    BC4J - ApplicationModuleImpl.passivateStateForUndo();
    oracle.jbo.PCollException: JBO-28006: Could not create persistence table
    PS_TXN_seq
    The error occurs as soon as passivation is done in the application.
    eos (end of section)
    ISSUE VERIFICATION
    ===================
    Verified the issue by error messages supplied by customer.
    eos (end of section)
    CAUSE DETERMINATION
    ====================
    The user connecting to the database from the ADF application does not have
    the required database grants to create a table. The upgrade did
    delete/remove some required privileges.
    eos (end of section)
    CAUSE JUSTIFICATION
    ====================
    If the database user does not have the privilege "CREATE ANY TABLE", then
    this user cannot create a database table. The tables PS_TXN and PS_TXN_seq
    are created at runtime when passivation is done for the first time. If
    the user does not have the necessary privileges the table cannot be created and
    the error JBO-28006 will occur.
    The upgrade of the database removed some necessary privileges.
    eos (end of section)
    STATUS
    ======
    @ WIP - Work In Progress
    09-NOV-05 10:16:56 GMT
    POTENTIAL SOLUTION(S)
    ======================
    Make sure the database user has the privileges "CREATE TABLE" and "CREATE
    SEQUENCE" to create objects such as tables and sequences.
    eos (end of section)
    POTENTIAL SOLUTION JUSTIFICATION(S)
    ====================================
    When the database user has the privileges "CREATE TABLE" and "CREATE
    SEQUENCE" it will be possible to create the BC4J tables PS_TXN and
    PS_TXN_seq on passivation.
    eos (end of section)
    SOLUTION / ACTION PLAN
    =======================
    To implement the solution, please execute the following steps:
    1. Connect as user SYS to the database.
    2. Grant at least following priviliges to the ADF application user:
    GRANT CREATE TABLE TO <user>
    GRANT CREATE SEQUENCE TO <user>
    REMARK: Replace <user> with the actual username that is used to connect
    from the adf application to the database.
    eos (end of section)
