MB1C LSMW recording fails with missing Production Date for batch-managed material

An LSMW recording for transaction MB1C with a batch-managed material was successfully created. The recording included entering the Production Date before pressing Enter to post the goods receipt. However, when the recording was executed, it returned the error "Enter the date of production". Looking at the screen in the LSMW batch results, the date is shown in the Production Date field.
I also tried creating the recording in the Transaction Recorder (transaction code SHDB) and processing it while simulating background mode, and got the same result.
Any ideas why the production date seems to be ignored? Any help will be greatly appreciated.

Similar Messages

  • How to deal with FLVs with missing metadata?

    I have, and will continue to receive, FLVs with missing metadata;
    for the most part the missing metadata is the file length.
    The exact files that are not working on our server are working on
    some other companies' FMS, so they must have some workaround
    ActionScript on the server, or some configuration, that makes it work.
    I'd like to know that workaround or that configuration change; that
    would be the ideal fix.
    I already know how to fix the FLV after I download it, but I wanted
    to know if there is another way to deal with this problem, one where
    I can leave the FLVs alone.
    Is there some server-side ActionScript that would tell FMS the length
    of the movie, or a configuration change in FMS that deals with this somehow?
    Any ideas?

    If you are using the same database and referencing two tables, then you don't need any special configuration: you use a single JDBC adapter. In the ESR you create two statement structures, one for each table. This is one option. The second option is to use a join statement and write a query, in which case there is one statement data structure.
    Please go through the SAP help link for the JDBC document structure.
    http://help.sap.com/SAPHELP_NW04s/helpdata/EN/2e/96fd3f2d14e869e10000000a155106/content.htm
    The second option can be done using the structure below.
    <StatementName>
      <anyName action="SQL_QUERY">
        <access>SQL-String with optional placeholder(s)</access>
        <key>
          <placeholder1>value1</placeholder1>
          <placeholder2>value2</placeholder2>
        </key>
      </anyName>
    </StatementName>
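    For illustration, a filled-in SQL_QUERY document might look like the sketch below. The table, column, and placeholder names here are hypothetical, and the $NAME$ placeholder syntax is my reading of the linked help page, so verify it against the documentation:
    <StatementName>
      <anyName action="SQL_QUERY">
        <access>SELECT emp_id, emp_name FROM employees WHERE dept_id = '$DEPT$'</access>
        <key>
          <DEPT>1000</DEPT>
        </key>
      </anyName>
    </StatementName>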

  • Please put a cost object with valid JV data for GL accoutnt (line 001)

    Dear All,
    When the user tries to post ASKBN, they get an error saying "Please put a cost object with valid JV data for GL accoutnt (line 001)".
    So we tried to look into the error, and the WBS, cost center, profit center, and JVA details are all in place. We still can't understand why this error occurs, and why the system asks for JV details when they are available.
    On checking, the WBS has an AuC, and the AuC has the cost center, which has the JV and PC details. So I think everything is in place.
    Can someone help and let us know if we are missing anything else that needs to be checked?
    Thanks,
    Hari Dharen

    Dear Hari Dharen,
    if something else is missing, you'll get an error message when you run ASKBN in test mode.
    regards Bernhard
    Edited by: Bernhard Kirchner on Oct 5, 2010 8:55 AM

  • Failed to retrieve long data for column "Contract Scope".

    Error: 0xC0202009 at Task 1 - Import P Data, Excel Source [1]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E21.
    Error: 0xC0208265 at Task 1 - Import P Data, Excel Source [1]: Failed to retrieve long data for column "Contract Scope".
    Error: 0xC020901C at Task 1 - Import P Data, Excel Source [1]: There was an error with output column "Contract Scope" (33) on output "Excel Source Output" (9). The column status returned was: "DBSTATUS_UNAVAILABLE".
    Error: 0xC0209029 at Task 1 - Import P Data, Excel Source [1]: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "output column "Contract Scope" (33)" failed because error code 0xC0209071 occurred, and the error row disposition on "output column
    "Contract Scope" (33)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
    Error: 0xC0047038 at Task 1 - Import P Data, SSIS.Pipeline: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Excel Source" (1) returned error code 0xC0209029. The component returned a failure code when the pipeline engine called
    PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.
    Information: 0x40043008 at Task 1 - Import P Data, SSIS.Pipeline: Post Execute phase is beginning.
    Information: 0x4004300B at Task 1 - Import P Data, SSIS.Pipeline: "component "OLE DB Destination" (95)" wrote 0 rows.
    Information: 0x40043009 at Task 1 - Import P Data, SSIS.Pipeline: Cleanup phase is beginning.
    I can't seem to change the size of this column in the Excel source. It just changes back to
    Unicode string [DT_WSTR] with 255 characters, but this field contains strings longer than 255 characters.

    I found a great solution to this limitation of Excel when reading cells with a lot of text and when the first rows have no data!
    Look here: http://dataintegrity.wordpress.com/2009/10/16/xlsx/
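    For what it's worth, workarounds in this area usually boil down to making the Excel provider treat the column as text instead of guessing Unicode string (255) from the first sampled rows. A minimal sketch of such a connection string, assuming the ACE OLE DB provider (the file path is hypothetical; IMEX=1 and the TypeGuessRows sampling behaviour are general provider settings, not something confirmed in this thread):
    Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\input.xlsx;
    Extended Properties="Excel 12.0 Xml;HDR=YES;IMEX=1";
    With IMEX=1 the provider reads intermixed columns as text, and the number of rows sampled for type guessing is controlled by the TypeGuessRows registry value; a cell longer than 255 characters is only detected as a memo column (DT_NTEXT) if it appears within the sampled rows.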

  • How to get XLR to show BPs with no transaction data for a given date range

    Hi -
    I am building an XLR report that does a comparison of net sales data across two periods for a given sales employee's BPs.
    The report has the row expansion:
    FACT BPA(*) SLP(SlpName = "ASalesPersonNameHere") ARDT(Code = "ARCreditMemo", "Invoice") Group by BPA.CardName
    and column expansions:
    FIG(SO_TaxDate = @StartDate:@EndDate)
    and
    FIG(SO_TaxDate = @StartDate2:@EndDate2)
    where @StartDate, @EndDate, @StartDate2, @EndDate2 are parameters that define the two ranges of dates.
    The column formulas are, from left to right:
    =ixDimGet("BPA", "CardName")
    =ixGet("SO_DocTotal")      <-- filtered by column expansion for first date range
    =ixGet("SO_DocTotal")      <-- filtered by column expansion for second date range
    The report works fine except for one problem: I would like it to also include BPs for which no transaction occurred in either date range.
    Any help is greatly appreciated!
    Thanks,
    Lang Riley

    Really appreciate your feedback! Those are good suggestions; I should have mentioned that I had already tried both of them.
    Removing FACT on BPA in this case ends up returning all the BPs and not respecting the SLP(SlpName = "aName") part of the query.
    Using **, i.e., * or #NULL, makes no change in the resulting data in this case. I had thought that ** would be the solution, but it didn't change the outcome. BPs whose sales employee is used as the filter and that have no transactions in either date range still do not appear.
    I should further mention that the IXL query, as it now stands, does return BPs for which only one of the periods has no data, just not both, and I have verified that applicable BPs with no transaction data for both periods do exist in my data set. It seems that perhaps the IXL query needs to be restructured? Please keep the suggestions coming, including how this query might be restructured if necessary.

  • Data for travel management?

    I'm currently looking for the data for Travel Management (Financials Management & Controlling --> Travel Management) in the DS server. I have been looking for it for a few weeks and I still can't find the data. Does anyone have any idea where the data is stored?

    Hi Guys,
    I am accessing the data from the PCL1 cluster too, but I would like to access it within a method, and doing so does not permit using the 'RP-IMP-CL-TE' macro, or the IMPORT statement in its present format. In fact, even the internal tables themselves need to be declared in the OO fashion, without a header line.
    That being said, I tried doing the import in the following fashion.
    IMPORT gte_version TO ote_version
           statu       TO lt_statu
           beleg       TO lt_beleg
           exbel       TO lt_exbel
           abzug       TO lt_abzug
           ziel        TO lt_ziel
           zweck       TO lt_zweck
           konti       TO lt_konti
           vsch        TO lt_vsch
           kmver       TO lt_kmver
           paufa       TO lt_paufa
           uebpa       TO lt_uebpa
           beler       TO lt_beler
           vpfps       TO lt_vpfps
           vpfpa       TO lt_vpfpa
           rot         TO lt_rot
           ruw         TO lt_ruw
           aend        TO lt_aend
           kostr       TO lt_kostr
           kostz       TO lt_kostz
           kostb       TO lt_kostb
           kostk       TO lt_kostk
           v0split     TO lt_v0split
           editor      TO lt_editor
           user        TO lt_user
    FROM DATABASE pcl1(te) ID gs_key
    ACCEPTING PADDING ACCEPTING TRUNCATION.
    where gte_version / ote_version are work areas and the remaining objects from STATU through USER are internal tables without header lines. Although this code passes the syntax check, it raises an exception during program execution (an exception of type CX_SY_IMPORT_MISMATCH_ERROR occurred, but was neither handled locally nor declared in a RAISING clause).
    Can you guys throw some light on what could be the problem here?
    Thanks and regards,
    Srikanth
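    For illustration only: CX_SY_IMPORT_MISMATCH_ERROR is a catchable class-based exception, so one way to narrow this down is to wrap the IMPORT in TRY/CATCH and look at the exception text, which names the object whose declared type deviates from what is stored in the cluster. A minimal sketch reusing the names from the post, with the field list shortened:
    DATA lx_mismatch TYPE REF TO cx_sy_import_mismatch_error.
    TRY.
        IMPORT gte_version TO ote_version
               statu       TO lt_statu
               beleg       TO lt_beleg
          FROM DATABASE pcl1(te) ID gs_key
          ACCEPTING PADDING ACCEPTING TRUNCATION.
      CATCH cx_sy_import_mismatch_error INTO lx_mismatch.
        " The exception text names the object that did not match the stored structure
        MESSAGE lx_mismatch->get_text( ) TYPE 'I'.
    ENDTRY.
    The usual root cause is one object whose declared line type no longer matches the structure stored in the cluster, which ACCEPTING PADDING / ACCEPTING TRUNCATION cannot always compensate for.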

  • Coherence integration with Oracle WebLogic Portal for session management

    Could you please let me know how to configure Coherence integration with Oracle WebLogic Portal for session management? It's very urgent; please help.

    Please take a look at the following web page -
    http://coherence.oracle.com/display/COH35UG/Coherence*Web+Session+Management+Module
    -Luk

  • SSIS 2012 is intermittently failing with the "Invalid date format" error below while importing data from a source table into a destination table with the same exact schema

    We migrated packages from SSIS 2008 to 2012. The package is working fine in all environments except one.
    SSIS 2012 is intermittently failing with the error below while importing data from a source table into a destination table with the same exact schema.
    Error: 2014-01-28 15:52:05.19
       Code: 0x80004005
       Source: xxxxxxxx SSIS.Pipeline
       Description: Unspecified error
    End Error
    Error: 2014-01-28 15:52:05.19
       Code: 0xC0202009
       Source: Process xxxxxx Load TableName [48]
       Description: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80004005.
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 11.0"  Hresult: 0x80004005  Description: "Invalid date format".
    End Error
    Error: 2014-01-28 15:52:05.19
       Code: 0xC020901C
       Source: Process xxxxxxxx Load TableName [48]
       Description: There was an error with Load TableName.Inputs[OLE DB Destination Input].Columns[Updated] on Load TableName.Inputs[OLE DB Destination Input]. The column status returned was: "Conversion failed because the data value overflowed
    the specified type.".
    End Error
    But when we reorder the "Updated" column in the destination table, the package imports the data successfully.
    This looks like a bug to me. Any suggestions?

    Hi Mohideen,
    Based on my research, the issue might be related to one of the following factors:
    Memory pressure. Check whether there is memory pressure when the issue occurs. In addition, if the package runs in the 32-bit runtime on that specific server, use the 64-bit runtime instead.
    A known issue with SQL Server Native Client. As a workaround, use the .NET data provider instead of SNAC.
    Hope this helps.
    Regards,
    Mike Yin
    TechNet Community Support

  • 2lis_02_scl extracts records with the wrong posting date for Goods Receipts

    Hi Experts,
    We are currently having an issue with a mismatch between the BW schedule line data and the R/3 values: the goods receipt posting date is updated incorrectly into BW.
    Example.
    In table EKBE purchase order history we have following records.
    MANDT EBELN        EBELP   ZEKKN VGABE GJAHR BELNR      BUZEI BEWTP BWART BUDAT      MENGE
    501   5600453404   00010   00    1     2010  5012473031 0001  E     101   23.01.2010        1.250,000
    501   5600453404   00010   00    1     2010  5012473031 0002  E     101   23.01.2010        1.250,000
    501   5600453404   00010   00    1     2010  5012473031 0003  E     101   23.01.2010        1.250,000
    501   5600453404   00010   00    1     2010  5012693310 0001  E     101   26.02.2010        1.250,000
    This means that on posting date 23.01.2010 we have 1250 * 3, i.e. a goods receipt quantity of 3750.
    However, when we check the extractor, we get multiple records in internal table C_T_DATA and in the PSA.
    Line  BWVORG  ETENR  SLFDT     MENGE     ROCANCEL  BEDAT     BUDAT     EBELN
    1     001     0001     20100125     3750.000          20100113     ZNB     F     00000000     5600453404
    2     001     0002     20100226     1250.000          20100113     ZNB     F     00000000     5600453404
    3     002     0001     20100125     3750.000     X     20100113     ZNB     F     20100123     5600453404
    4     002     0001     20100125     3750.000          20100113     ZNB     F     20100123     5600453404
    5     003     0001     20100125     3750.000     X     20100113     ZNB     F     20100127     5600453404
    6     003     0001     20100125     3750.000          20100113     ZNB     F     20100127     5600453404
    7     002     0001     20100125     3750.000     X     20100113     ZNB     F     20100226     5600453404
    8     002     0001     20100125     3750.000          20100113     ZNB     F     20100226     5600453404
    9     002     0002     20100226     1250.000     X     20100113     ZNB     F     20100226     5600453404
    As can be seen, we have record no. 8 for ETENR (schedule line 1) with posting date 26.02.2010 and another record with posting date 23.01.2010.
    Since we are getting 2 records, the record with the incorrect posting date overwrites the record with the correct one.
    Any idea whether this could be a standard extractor problem, or any other way to resolve this issue?
    Any help would be appreciated.

    First of all, are you using a staging DSO? (Ideally you should.)
    If yes, is it a write-optimized DSO? (Again, this is ideal.)
    If it is a standard DSO, the values may be overwritten upon activation.
    You have 3 records (quantity = 1250 * 3) that were receipted on 23.01.2010, where the posting date = 23.01.2010.
    You also have a record (quantity = 1250 * 1) that was receipted on 26.02.2010, with posting date = 26.02.2010.
    Now, in RSA3 and in the PSA you can see more records than expected.
    This is because you have before and after images (ROCANCEL = X or blank).
    ROCANCEL = X --> before image (record before the change)
    ROCANCEL = blank --> after image (record after the change)
    This is a standard property of the extractor.
    We also have something known as BWVORG (process keys). Each process, i.e. creating a PO, goods receipt, invoice etc., has a different process key:
    Creation - 001
    Goods Rcpt. - 002
    Invoice - 003
    We can see that record 8, i.e. BWVORG = 002 (GR), was modified on 26.02.2010.
    That is why there is a before image and an after image.
    Which one should be the correct posting date? 23.01.2010?
    Normally, in a write-optimized DSO you will have all the records (before and after images, and others as well).
    I hope this helps.
    Please let me know if otherwise.

  • Update record failed with "SPOC 003: Error in spool C call: Error from TemS

    Guys,
    I have a 4.7 production system with 4 dialog instances.
    Logon groups are configured.
    For one of the application servers, when a user posts any PO or GR, the update fails with the above message.
    The spool is configured to be stored on file. SM21 reports a read/write problem on /globle/100SPOOL.
    But all of that looks fine: sidadm has full read and write permission on it.
    It also shows "Operating system call gethostbyname failed".
    Please help me solve this.
    Thanks a ton in advance.
    -Ganesh

    Hi Thomas,
    Thanks. I checked on this and all works fine.
    Also, we use NIS services to maintain all this, and our Unix team says NIS, hosts, and everything else work fine.
    Please suggest something, as the stuck updates are causing many problems. Temporarily we have removed this particular instance from the logon group so that no user will log on to it.
    Thanks in advance, guys.
    -Ganesh

  • Missing master data for Account number Delta

    Hi,
    I am working on a production system. In the delta load, some records are missing master data (the street and city attributes) for account numbers starting with 9xxxxxx; the field length is 10. These missing records do get loaded with a full load or a full repair, but I want them to load with the regular delta load. Every day I get records with other account numbers starting with 1 to 8, and that load is fine. The problem is only with account numbers starting with 9xxxxxx.
    Please help me understand what the problem is, where I can find related information, and how to fix this issue.
    Is there a delta setup problem?
    This is a generic DataSource pulling from table ADRC, with a numeric pointer.

    Hi,
    Please check on the source system side in RSA3 whether you are able to locate the data with the mentioned account numbers.
    Please post your reply so we can proceed further.
    Cheers,
    Swapna.G

  • Transport fails with return code 12 for 0CHANOTASSIGNED Characteristics

    Dear Experts,
    I am transporting the 0CHANOTASSIGNED characteristics from BI Dev to BI QA.
    In BI Dev, I use the 'Necessary Objects' option.
    While importing the request into BI QA, it fails with return code 12. Some of the errors are:
    Start of the after-import method RS_IOBJ_AFTER_IMPORT for object type(s) IOBJ (Activation Mode)
    System error in lock management
    Error AD 025 during after import handling of objects with type(s) IOBJ
    Start of the after-import method RS_IOBC_AFTER_IMPORT for object type(s) IOBC (Activation Mode)
    Error when checking the InfoObjectCatalog 0CHANOTASSIGNED
    InfoObject 0ABCINDIC is not available in active version
    InfoObject 0ABCPRC_TYP is not available in active version
    InfoObject 0ABCPROCESS is not available in active version
    InfoObject 0ACCT_TYPE is not available in active version
    InfoObject 0ACTIONREAS is not available in active version
    This goes on for a lot of InfoObjects. If I go to RSA1 in BI QA and check any of the above objects, they are 'INACTIVE',
    but they are 'ACTIVE' in BI Dev.
    Please help me in resolving this issue.
    Thanks and BR,
    Raj Jain

    Hi I.R.K,
    I recreated a transport for just one InfoObject (0ABCINDIC), which was one of the objects shown as not active in the BI QA transport logs.
    I did not make any changes or delete any data. I then just transported this request from BI Dev to BI QA.
    The transport was successful with return code 0. I then checked this object in RSD1 in BI QA and it was active.
    Any ideas what the cause may be?
    Thanks,
    Raj Jain

  • Grid installation failing with error  Cannot connect to Node Manager. : Conf

    I am trying to install Grid; however, it is failing with the following error.
    I installed WebLogic Server 10.3.4 and an 11g R2 database.
    ==============================================================
    Code of Exception: Error occured while performing nmConnect : Cannot connect to Node Manager. : Configuration error while reading domain directory
    Use dumpStack() to view the full stacktrace
    Thu Mar 03 21:49:03 IST 2011
    <traceback object at 1>
    Connecting to Node Manager ...
    This Exception occurred at Thu Mar 03 21:49:03 IST 2011.
    Thu Mar 03 21:49:03 IST 2011
    SEVERE: Exception: during stop of admin server
    Thu Mar 03 21:49:03 IST 2011
    Name of Exception: main.WLSTException
    Thu Mar 03 21:49:03 IST 2011
    Code of Exception: Error occured while performing nmConnect : Cannot connect to Node Manager. : Configuration error while reading domain directory
    Use dumpStack() to view the full stacktrace
    Thu Mar 03 21:49:03 IST 2011
    <traceback object at 2>
    Connecting to Node Manager ...
    This Exception occurred at Thu Mar 03 21:49:03 IST 2011.
    Thu Mar 03 21:49:03 IST 2011
    Node Manager is not running
    Mar 3, 2011 9:49:03 PM oracle.sysman.omsca.util.CoreOMSConfigAssistantUtil execCommand
    INFO: error messages of the command :
    weblogic.nodemanager.NMException: Configuration error while reading domain directory
    at weblogic.nodemanager.client.NMServerClient.checkResponse(NMServerClient.java:301)
    at weblogic.nodemanager.client.NMServerClient.checkResponse(NMServerClient.java:314)
    at weblogic.nodemanager.client.NMServerClient.connect(NMServerClient.java:249)
    at weblogic.nodemanager.client.NMServerClient.checkConnected(NMServerClient.java:200)
    at weblogic.nodemanager.client.NMServerClient.checkConnected(NMServerClient.java:206)
    at weblogic.nodemanager.client.NMServerClient.getVersion(NMServerClient.java:53)
    at weblogic.management.scripting.NodeManagerService.verifyConnection(NodeManagerService.java:179)
    at weblogic.management.scripting.NodeManagerService.nmConnect(NodeManagerService.java:173)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.python.core.PyReflectedFunction.__call__(Unknown Source)
    at org.python.core.PyMethod.__call__(Unknown Source)
    at org.python.core.PyObject.__call__(Unknown Source)
    at org.python.core.PyObject.invoke(Unknown Source)
    at org.python.pycode._pyx2.nmConnect$3(<iostream>:118)
    at org.python.pycode._pyx2.call_function(<iostream>)
    at org.python.core.PyTableCode.call(Unknown Source)
    at org.python.core.PyTableCode.call(Unknown Source)
    at org.python.core.PyFunction.__call__(Unknown Source)
    at org.python.core.PyObject.__call__(Unknown Source)
    at org.python.pycode._pyx41.stopManagedServer$6(/u02/oracle/Oracle/Middleware/oms11g/sysman/omsca/scripts/wls/start_server.py:142)
    at org.python.pycode._pyx41.call_function(/u02/oracle/Oracle/Middleware/oms11g/sysman/omsca/scripts/wls/start_server.py)
    at org.python.core.PyTableCode.call(Unknown Source)

    I am still getting the error, even after installing WebLogic Server 10.3.2:
    Mar 4, 2011 10:44:42 AM oracle.sysman.omsca.util.CoreOMSConfigAssistantUtil execCommand
    INFO: error messages of the command :
    weblogic.nodemanager.NMConnectException: Connection refused. Could not connect to NodeManager. Check that it is running at node1.slksoft.com:7,403.
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
    at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
    at java.net.Socket.connect(Socket.java:525)
    at weblogic.nodemanager.client.SSLClient.createSocket(SSLClient.java:38)
    at weblogic.nodemanager.client.NMServerClient.connect(NMServerClient.java:227)
    at weblogic.nodemanager.client.NMServerClient.checkConnected(NMServerClient.java:199)
    at weblogic.nodemanager.client.NMServerClient.checkConnected(NMServerClient.java:205)
    at weblogic.nodemanager.client.NMServerClient.getVersion(NMServerClient.java:52)
    at weblogic.management.scripting.NodeManagerService.verifyConnection(NodeManagerService.java:175)
    at weblogic.management.scripting.NodeManagerService.nmConnect(NodeManagerService.java:168)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.python.core.PyReflectedFunction.__call__(Unknown Source)
    at org.python.core.PyMethod.__call__(Unknown Source)
    at org.python.core.PyObject.__call__(Unknown Source)
    at org.python.core.PyObject.invoke(Unknown Source)
    I am also getting the error below:
    oracle@node1 grid]$ export ORACLE_SID=prod
    [oracle@node1 grid]$ emctl status agent
    EM Configuration issue. /u01/oracle/product/11.2.0/db_1/node1.soft.com_prod not found.
    [oracle@node1 grid]$

  • Set-AzureServiceDiagnosticsExtension or publishing with diagnostics enabled fails with Azure SDK 2.5 for existing Azure services

    Similar to https://social.technet.microsoft.com/Forums/systemcenter/en-US/487234f4-9748-4f49-ab7b-ce523da4c500/publish-cloud-service-fails-from-visual-studio-2013-update-4-published-asset-entry-for-image but since the given answer provides no solution
    and I found more details, I felt that opening this new question providing more details was necessary.
    I have an existing Azure service with two web roles (service and worker), first published in May 2012. Recently I tried to update from SDK 2.2 to SDK 2.5 and from Visual Studio 2013 Update 2 to Update 4. The main reason behind this was to move from log4net to WAD, and in doing so, of course, to move directly to the new diagnostics version. So before publishing I enabled WAD diagnostics logging in the properties of both roles.
    Trying to publish from Visual Studio to the existing Azure service fails; the VS output shows the following lines:
    11:45:24 - Checking for Remote Desktop certificate...
    11:45:25 - Applying Diagnostics extension.
    11:45:45 - Published Asset Entry for Image Microsoft.Azure.Diagnostics_PaaSDiagnostics_europeall_manifest.xml not found.
    What's working 1: For testing purposes, I created a new Azure service in the Azure portal. Publishing the same solution from the same development environment to this new service works without problems; the service with WAD diagnostics logging runs fine. Unfortunately this is no solution for my production service, which has a DNS alias and SSL certificates bound to the existing Azure service.
    What's working 2: Publishing the solution to the existing Azure service WITHOUT diagnostics enabled works.
    Problem with that: Trying to activate WAD diagnostics logging after publishing, using the Azure cmdlets, also fails with a similar error message. Following http://blogs.msdn.com/b/kwill/archive/2014/12/02/windows-azure-diagnostics-upgrading-from-azure-sdk-2-4-to-azure-sdk-2-5.aspx I tried:
    PS C:\> Set-AzureServiceDiagnosticsExtension -StorageContext $storageContext -DiagnosticsConfigurationPath $public_config -ServiceName $service_name -Slot 'Staging' -Role $role_name
    VERBOSE: Setting PaaSDiagnostics configuration for MyWebRole.
    Set-AzureServiceDiagnosticsExtension : BadRequest : Published Asset Entry for Image
    Microsoft.Azure.Diagnostics_PaaSDiagnostics_europeall_manifest.xml not found.
    At line:1 char:1
    + Set-AzureServiceDiagnosticsExtension -StorageContext $storageContext -Diagnostic ...
    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : NotSpecified: (:) [Set-AzureServiceDiagnosticsExtension], CloudException
        + FullyQualifiedErrorId : Microsoft.WindowsAzure.CloudException,Microsoft.WindowsAzure.Commands.ServiceManagement.
       Extensions.SetAzureServiceDiagnosticsExtensionCommand
    The problem seems to be related to those parts of the service configuration in the cloud which are not replaced by a new deployment, so I compared both services using the Azure cmdlet Get-AzureService.
    I found that the new service has properties which the old one is missing:
    ExtendedProperties      : {[ResourceGroup, myazureservice], [ResourceLocation, North Europe]}
    Is this a hint? Reinstalling or repairing Visual Studio is not the answer to this problem!
    What's the meaning of "Published Asset Entry for Image Microsoft.Azure.Diagnostics_PaaSDiagnostics_europeall_manifest.xml"?
    [Perhaps MS will publish a newer version of its Azure cmdlets, but that's not today's story.]
    So what are possible reasons or fixes for this behaviour? Going back to log4net is not my favorite option. Even worse, while there are alternative logging solutions, I currently have no performance counter monitoring in the Azure portal (Remote Desktop and perfmon is NO solution). Is there any alternative to going back to SDK 2.4?
    Best regards,
     Andreas

    Hi Andreas,
    Thanks for your feedback.
    I will test and try to reproduce your issue on my side. If I find anything, I will post back for you.
    Thanks for your understanding.
    Regards,
    Will

  • Update trigger fails with value too large for column error on timestamp

    Hello there,
    I've got a problem with several update triggers monitoring a set of tables.
    Upon each update, the updated data is compared with the current values in the table columns.
    If different values are detected, the update timestamp is set to current_timestamp. That
    way we have a timestamp that reflects real changes in relevant data. I attached an example of
    that kind of trigger below. The triggers on the monitored tables differ only in the columns that
    are compared.
    CREATE OR REPLACE TRIGGER T_ava01_obj_cont
    BEFORE UPDATE ON ava01_obj_cont
    FOR EACH ROW
    DECLARE
      v_changed  boolean := false;
    BEGIN
      IF NOT v_changed THEN
        v_changed := (:old.cr_adv_id IS NULL AND :new.cr_adv_id IS NOT NULL) OR
                     (:old.cr_adv_id IS NOT NULL AND :new.cr_adv_id IS NULL) OR
                     (:old.cr_adv_id IS NOT NULL AND :new.cr_adv_id IS NOT NULL AND :old.cr_adv_id != :new.cr_adv_id);
      END IF;
      IF NOT v_changed THEN
        v_changed := (:old.is_euzins_relevant IS NULL AND :new.is_euzins_relevant IS NOT NULL) OR
                     (:old.is_euzins_relevant IS NOT NULL AND :new.is_euzins_relevant IS NULL) OR
                     (:old.is_euzins_relevant IS NOT NULL AND :new.is_euzins_relevant IS NOT NULL AND :old.is_euzins_relevant != :new.is_euzins_relevant);
      END IF;
    [.. more values being compared ..]
      IF v_changed THEN
        :new.update_ts := current_timestamp;
      END IF;
    END T_ava01_obj_cont;
    Really relevant is the statement
    :new.update_ts := current_timestamp;
    So far so good. The problem is, it works most of the time. Only sometimes does it fail with the following error:
    SQL state [72000]; error code [12899]; ORA-12899: value too large for column "LGT_CLASS_AVALOQ"."AVA01_OBJ_CONT"."UPDATE_TS"
    (actual: 28, maximum: 11)
    I can't see how the value of systimestamp or current_timestamp (I tried both) could be too large for
    a column defined as TIMESTAMP(6). We've got tables where more updates occur than elsewhere;
    that's where most of the errors pop up. Other tables with fewer updates show errors only
    sporadically or even never. I can't see any kind of error pattern. It's as if every 10,000th update
    or so fails.
    I was desperate enough to try a language-dependent transformation like
    IF v_changed THEN
      l_update_date := systimestamp || '';
      SELECT value INTO l_timestamp_format FROM nls_database_parameters WHERE parameter = 'NLS_TIMESTAMP_TZ_FORMAT';
      :new.update_ts := to_timestamp_tz(l_update_date, l_timestamp_format);
    END IF;
    to be sure the format is right. It didn't change a thing.
    We are using Oracle version 10.2.0.4.0 Production.
    Did anyone encounter this kind of behaviour and solve it? I'm now pretty certain that it has to
    be an Oracle bug. What is the forum's opinion on that? Would you suggest filing a bug report?
    Thanks in advance for your help.
    Kind regards
    Jan

    Could you please edit your post and use formatting and code tags? This is pretty much unreadable, and the forum has mangled some of your code.
    Instructions are here: http://forums.oracle.com/forums/help.jspa
