ACS 4.2: Day-wise logging not reflecting

Hi All,
I am trying to capture ACS accounting logs day-wise but am unable to do so. When we check under Reports and Monitoring, it shows a single log that contains all the accounting entries.
I need help modifying the configuration to get the logs on a daily basis.
Regards
Amar

The RADIUS logs are only recording Starts and Stops. I also tried capturing on the switch with the aaa accounting commands, but nothing shows up there.
The authentication is machine auth, using a certificate from AD. The RADIUS server (ACS) is configured to use external authentication, and I have it set up with group mappings. When I intentionally misconfigure that so the laptop with the valid certificate fails its credentials, I do see the failed attempts, which tells me that the logging is working, but only for failed attempts by valid computers, and that's of almost no use to me. I need to see when anonymous users are attempting the same thing. I am using a spare laptop that is not joined to my domain and therefore does not have a machine certificate.
I have gone back and forth with the switch config and have used guest VLANs and no guest VLANs. The switchport AuthSM state always stays at "Connecting" and the PortStatus stays at "Unauthorized", which is the default it starts in. When I am using guest VLANs the port eventually goes to connected, but it is in the guest VLAN. That tells me the switch is going through the EAP messages and determines there is no valid auth; in fact I even see EAP-Fail in the debug dot1x output. So why would that not be logged on the RADIUS server? The laptop pops up messages saying it couldn't find a valid certificate, and in Network Connections the interface status shows failed authentication. So Windows is failing authentication.
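For reference, a minimal sketch of the switch-side 802.1X and AAA accounting configuration being described (older dot1x-style IOS syntax; the RADIUS server address, key and guest VLAN are placeholders, not values from this post, and exact commands vary by IOS version):

    aaa new-model
    aaa authentication dot1x default group radius
    aaa accounting dot1x default start-stop group radius
    dot1x system-auth-control
    radius-server host 10.1.1.10 auth-port 1812 acct-port 1813 key <shared-secret>
    !
    interface FastEthernet0/1
     switchport mode access
     dot1x port-control auto
     dot1x pae authenticator
     dot1x guest-vlan 99

With this in place the switch should send accounting Start/Stop records to ACS for sessions it authorizes.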

Similar Messages

  • DAY WISE INVENTORY REPORT

    Hi
    In which transaction can we see the inventory report day-wise?
    (We know we can see the month-wise inventory report in MC.5.)

    Hi Raj,
    I don't think we have a standard report for day-wise inventory (not sure);
    you may have to go for a Z-development.
    Anyway, just wait for the experts' replies.
    Regards
    kumar

  • Stock Ledger Report in Day Wise not giving correct values for Opening Stock

    Dear Experts,
    I am working on a stock ledger report to give the day-wise data.
    Since yesterday's closing stock becomes today's opening stock,
    To get Opening Stock,
    I have restricted the stock key figure with 2 variables on calday:
        DATE FROM variable with <= (less than or equal to) and offset -1
        DATE TO variable with <= (less than or equal to) and offset -1
    To get Closing Stock,
    I have restricted the stock key figure with 2 variables on calday:
        DATE FROM variable with <= (less than or equal to)
        DATE TO variable with <= (less than or equal to)
    But in the output the opening stock values are not coming out correctly, and for the given range of dates the opening stock for the last date shows as zero.
    Could you please tell me how I can get the correct values for the opening stock?
    Thanks in advance.

    Hi Arjun,
    Seems like you are making it more complicated. What is your selection screen criteria?
    Ideally you should only use the offset.
    You will have, say, Calday in the rows and the stock key figures in the columns:

    Calday         Opening Stock                    Closing Stock
    01/06/2009     Closing stock of 31/05/2009      Stock of 01/06/2009
    02/06/2009     Closing stock of 01/06/2009      Stock of 02/06/2009
    03/06/2009     Closing stock of 02/06/2009      Stock of 03/06/2009
    So, from the above scenario, create one RKF and include Calday in it. Create a replacement path variable on Calday and apply an offset of -1.
    Your opening stock will then be calculated from the closing stock of the previous day.
    - Danny

  • ACS 5.5 and disappearing logs

    Hello
    I'm having issues with logging on a Cisco ACS 5.5.0.46 cluster. The cluster was upgraded from the latest 5.3 ACS release to 5.5.
    After upgrading to 5.5, logging was working fine: Monitoring and Reports had the historical logs and was logging live/current authentications.
    A few weeks back there was an issue outlined in the post below:
    https://supportforums.cisco.com/thread/2264123?tstart=30
    Logging on the log collector stopped working. After restarting the logging process in the cluster, logging on the log collector started working again, and I restored the missing logs from backup.
    A few days ago the log collector stopped working again - no logs at all (nothing live or historic). I restarted the log collector ACS VM and it started logging again, but logs prior to the restart are missing.
    The ACS cluster is logging to syslog but I really need to have reliable logs on the ACS.
    I'm aware of a recent patch for 5.5 but the release notes don't seem to mention the above issue.
    Is it worth patching 5.5, or should I roll back to 5.4?
    Thanks
    andy

    Andy,
    Do you have the log recovery option enabled? In the Monitoring and Report Viewer, select Monitoring Configuration > System Operations > Log Message Recovery.
    For more information, go through the link listed below:
    http://www.cisco.com/en/US/docs/net_mgmt/cisco_secure_access_control_system/5.5/user/guide/viewer_sys_ops.html#wp108302
    ~BR
    Jatin Katyal
    **Do rate helpful posts**

  • 2LIS_02_ITM deleted line items are not reflected in BW

    Hi,
    Our current Data flow 2LIS_02_ITM-->DSO-->CUBE.
    I am analyzing an invoice. When it was created on 01.04.2014 it had 10 line items, and all the data was loaded to BW - the invoice with 10 line items.
    In ECC I can now see that the same invoice has only 5 line items, and from the change log I can see that 5 line items were deleted on 15.04.2014. These changes are not reflected in BW; in BW the invoice data still shows 10 line items.
    If I run the setup table job and check RSA3 in ECC for that invoice, I get only 5.
    I believe it is something to do with 0RECORDMODE. Can you please let me know how to fix this in BW?
    Thanks

    Yes, this is known behavior in the case of deltas.
    For deleted line items the ROCANCEL field will have an entry with R.
    To handle this you need to map the ROCANCEL field in the technical group of the transformation (between the DataSource and the DSO) to 0RECORDMODE.
    Once you do that, after activating the data in the DSO it will nullify the records, and the deleted order or item will no longer appear.
    Regards,
    AL

  • Reg :Production order cost  report day wise.

    Dear Expert,
    1. We want a report of the cost for a particular production order, day-wise.
    The scenario is like this: the production order is released for 100 qty.
    Today they confirmed only 50 qty.
    Tomorrow they will confirm 50 qty.
    Now they want to see the cost for today's confirmation and for tomorrow's confirmation.
    The reason is that the raw material cost changes daily and they want to track the variance.
    Is there any standard report with which we can achieve this, or do we have to go for a development?
    2. I also need to know how many production orders have been released daily.
    Thank u in advance.

    A day is not a controlled cost object. You could write a report to look at costs gathered in a day (by reporting date), but I strongly suggest that if you need to analyze costs per day you switch to daily orders.
    You'll find then that all the standard processes work for you.

  • Archive Logs NOT APPLIED but transferred

    Hi Gurus,
    I have configured primary and standby databases in the same Oracle Home. The OS version is OEL 5 and the database version is 10.2.0.1. I can see the archive logs arriving at the standby site, but they are not getting applied on the standby database. I don't have OLAP installed in my database version; could this be causing the issue? I have attached my primary alert log details below for your reference:
    Thu Aug 30 23:55:37 2012
    Starting ORACLE instance (normal)
    Cannot determine all dependent dynamic libraries for /proc/self/exe
    Unable to find dynamic library libocr10.so in search paths
    RPATH = /ade/aime1_build2101/oracle/has/lib/:/ade/aime1_build2101/oracle/lib/:/ade/aime1_build2101/oracle/has/lib/:
    LD_LIBRARY_PATH is not set!
    The default library directories are /lib and /usr/lib
    Unable to find dynamic library libocrb10.so in search paths
    Unable to find dynamic library libocrutl10.so in search paths
    Unable to find dynamic library libocrutl10.so in search paths
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 2
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =18
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.1.0.
    System parameters with non-default values:
    processes = 150
    sga_target = 289406976
    control_files = /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control01.ctl, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control02.ctl, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/control03.ctl
    db_file_name_convert = /home/oracle/oracle/product/10.2.0/db_1/oradata/newstand, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim
    log_file_name_convert = /home/oracle/oracle/product/10.2.0/db_1/oradata/newstand, /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim, /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/NEWSTAND/onlinelog, /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area/NEWPRIM/onlinelog
    db_block_size = 8192
    compatible = 10.2.0.1.0
    log_archive_config = DG_CONFIG=(newprim,newstand)
    log_archive_dest_1 = LOCATION=/home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/arch/
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
    DB_UNIQUE_NAME=newprim
    log_archive_dest_2 = SERVICE=newstand LGWR ASYNC VALID_FOR=(online_logfiles,primary_role) DB_UNIQUE_NAME=newstand
    log_archive_dest_state_1 = enable
    log_archive_dest_state_2 = enable
    log_archive_max_processes= 30
    log_archive_format = %t_%s_%r.dbf
    fal_client = newprim
    fal_server = newstand
    db_file_multiblock_read_count= 16
    db_recovery_file_dest = /home/oracle/oracle/product/10.2.0/db_1/flash_recovery_area
    db_recovery_file_dest_size= 2147483648
    standby_file_management = AUTO
    undo_management = AUTO
    undo_tablespace = UNDOTBS1
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=newprimXDB)
    job_queue_processes = 10
    background_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump
    user_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/udump
    core_dump_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/cdump
    audit_file_dest = /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/adump
    db_name = newprim
    db_unique_name = newprim
    open_cursors = 300
    pga_aggregate_target = 95420416
    PMON started with pid=2, OS id=28091
    PSP0 started with pid=3, OS id=28093
    MMAN started with pid=4, OS id=28095
    DBW0 started with pid=5, OS id=28097
    LGWR started with pid=6, OS id=28100
    CKPT started with pid=7, OS id=28102
    SMON started with pid=8, OS id=28104
    RECO started with pid=9, OS id=28106
    CJQ0 started with pid=10, OS id=28108
    MMON started with pid=11, OS id=28110
    MMNL started with pid=12, OS id=28112
    Thu Aug 30 23:55:38 2012
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    starting up 1 shared server(s) ...
    Thu Aug 30 23:55:38 2012
    ALTER DATABASE MOUNT
    Thu Aug 30 23:55:42 2012
    Setting recovery target incarnation to 2
    Thu Aug 30 23:55:43 2012
    Successful mount of redo thread 1, with mount id 1090395834
    Thu Aug 30 23:55:43 2012
    Database mounted in Exclusive Mode
    Completed: ALTER DATABASE MOUNT
    Thu Aug 30 23:55:43 2012
    ALTER DATABASE OPEN
    Thu Aug 30 23:55:43 2012
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=16, OS id=28122
    ARC1 started with pid=17, OS id=28124
    ARC2 started with pid=18, OS id=28126
    ARC3 started with pid=19, OS id=28128
    ARC4 started with pid=20, OS id=28133
    ARC5 started with pid=21, OS id=28135
    ARC6 started with pid=22, OS id=28137
    ARC7 started with pid=23, OS id=28139
    ARC8 started with pid=24, OS id=28141
    ARC9 started with pid=25, OS id=28143
    ARCa started with pid=26, OS id=28145
    ARCb started with pid=27, OS id=28147
    ARCc started with pid=28, OS id=28149
    ARCd started with pid=29, OS id=28151
    ARCe started with pid=30, OS id=28153
    ARCf started with pid=31, OS id=28155
    ARCg started with pid=32, OS id=28157
    ARCh started with pid=33, OS id=28159
    ARCi started with pid=34, OS id=28161
    ARCj started with pid=35, OS id=28163
    ARCk started with pid=36, OS id=28165
    ARCl started with pid=37, OS id=28167
    ARCm started with pid=38, OS id=28169
    ARCn started with pid=39, OS id=28171
    ARCo started with pid=40, OS id=28173
    ARCp started with pid=41, OS id=28175
    ARCq started with pid=42, OS id=28177
    ARCr started with pid=43, OS id=28179
    ARCs started with pid=44, OS id=28181
    Thu Aug 30 23:55:44 2012
    ARC0: Archival started
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    ARCt started with pid=45, OS id=28183
    LNS1 started with pid=46, OS id=28185
    Thu Aug 30 23:55:48 2012
    Thread 1 advanced to log sequence 68
    Thu Aug 30 23:55:48 2012
    ARCo: Becoming the 'no FAL' ARCH
    ARCo: Becoming the 'no SRL' ARCH
    Thu Aug 30 23:55:48 2012
    ARCp: Becoming the heartbeat ARCH
    Thu Aug 30 23:55:48 2012
    Thread 1 opened at log sequence 68
    Current log# 1 seq# 68 mem# 0: /home/oracle/oracle/product/10.2.0/db_1/oradata/newprim/redo01.log
    Successful open of redo thread 1
    Thu Aug 30 23:55:48 2012
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Thu Aug 30 23:55:48 2012
    SMON: enabling cache recovery
    Thu Aug 30 23:55:48 2012
    Successfully onlined Undo Tablespace 1.
    Thu Aug 30 23:55:48 2012
    SMON: enabling tx recovery
    Thu Aug 30 23:55:49 2012
    Database Characterset is WE8ISO8859P1
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=47, OS id=28205
    Thu Aug 30 23:55:49 2012
    Error 1034 received logging on to the standby
    Thu Aug 30 23:55:49 2012
    Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc:
    ORA-01034: ORACLE not available
    FAL[server, ARC1]: Error 1034 creating remote archivelog file 'newstand'
    FAL[server, ARC1]: FAL archive failed, see trace file.
    Thu Aug 30 23:55:49 2012
    Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Thu Aug 30 23:55:49 2012
    ORACLE Instance newprim - Archival Error. Archiver continuing.
    Thu Aug 30 23:55:49 2012
    db_recovery_file_dest_size of 2048 MB is 9.77% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Thu Aug 30 23:55:50 2012
    Errors in file /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/udump/newprim_ora_28120.trc:
    ORA-00604: error occurred at recursive SQL level 1
    ORA-12663: Services required by client not available on the server
    ORA-36961: Oracle OLAP is not available.
    ORA-06512: at "SYS.OLAPIHISTORYRETENTION", line 1
    ORA-06512: at line 15
    Thu Aug 30 23:55:50 2012
    Completed: ALTER DATABASE OPEN
    Thu Aug 30 23:56:33 2012
    FAL[server]: Fail to queue the whole FAL gap
    GAP - thread 1 sequence 1-33
    DBID 1090398314 branch 792689455
    Kindly guide me, please.
    -Vimal.

    CKPT: The trace file details are added below for your reference;
    /home/oracle/oracle/product/10.2.0/db_1/admin/newprim/bdump/newprim_arc1_28124.trc
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning and Data Mining options
    ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_1
    System name:     Linux
    Node name:     localhost.localdomain
    Release:     2.6.18-8.el5PAE
    Version:     #1 SMP Tue Jun 5 23:39:57 EDT 2007
    Machine:     i686
    Instance name: newprim
    Redo thread mounted by this instance: 1
    Oracle process number: 17
    Unix process pid: 28124, image: [email protected] (ARC1)
    *** SERVICE NAME:() 2012-08-30 23:55:48.314
    *** SESSION ID:(155.1) 2012-08-30 23:55:48.314
    kcrrwkx: nothing to do (start)
    Redo shipping client performing standby login
    OCISessionBegin failed -1
    .. Detailed OCI error val is 1034 and errmsg is 'ORA-01034: ORACLE not available
    *** 2012-08-30 23:55:49.723 60679 kcrr.c
    Error 1034 received logging on to the standby
    Error 1034 connecting to destination LOG_ARCHIVE_DEST_2 standby host 'newstand'
    Error 1034 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'newstand'
    ORA-01034: ORACLE not available
    *** 2012-08-30 23:55:49.723 58941 kcrr.c
    kcrrfail: dest:2 err:1034 force:0 blast:1
    kcrrwkx: unknown error:1034
    ORA-16055: FAL request rejected
    ARCH: Connecting to console port...
    ARCH: Connecting to console port...
    kcrrwkx: nothing to do (end)
    *** 2012-08-31 00:00:43.417
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:05:43.348
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:10:43.280
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:15:43.217
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:20:43.160
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:25:43.092
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:30:43.031
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:35:42.961
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:40:42.890
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:45:42.820
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:50:42.755
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 00:55:42.686
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 01:00:42.631
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 01:05:42.565
    kcrrwkx: nothing to do (start)
    *** 2012-08-31 01:10:42.496
    kcrrwkx: nothing to do (start)
    Mahir: Yes, I have my 4 standby redo logs! I created the standby manually, without using RMAN.
    Hemant: If it is asking for even the first sequence of the thread, then obviously nothing has been applied on the standby. In that case I suppose it is not really called a 'GAP'!
    Thanks.
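    For anyone checking the same symptom, a minimal sketch of the standard queries used to verify shipping and apply on both sides (standard v$ views only; newprim/newstand are the instance names from the post, and the ORA-01034 in the trace suggests the standby instance was not started/mounted when ARC1 tried to connect):

        -- On the primary: destination status and last error for the standby destination
        SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;

        -- On the standby: which sequences have arrived and which have been applied
        SELECT sequence#, applied FROM v$archived_log ORDER BY sequence#;

        -- On the standby: confirm the managed recovery (MRP) process is running
        SELECT process, status, sequence# FROM v$managed_standby;

        -- If MRP is not running, start redo apply on the standby
        ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;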

  • OBIEE 11g : query log not found

    Hi,
    I am not able to see the query log in 11g Answers; Manage Sessions throws the error "query log not found".
    I am using OBIEE 11g. The 11g Admin client is installed on my local machine and I upload the RPD through Enterprise Manager. But I am not able to open the RPD in online mode, which is why I cannot change the query log level to 2 (as in OBIEE 10g) to see the query log in Answers. Usually, after making changes to the 11g RPD, I upload it to the server via the Enterprise Manager console.
    Can anyone please tell me the correct way to see the query log, how I can open the RPD in online mode, and how I can set the query log level in OBIEE 11g?
    Please help.
    Thanks
    Titas

    Hi,
    It's a known bug; it can be handled by the methods below.
    Method 1:
    If you enabled the log level per user, it may be overridden by the setting below, so please check both places.
    In the Administration Tool, go to Tools --> Options --> Repository.
    The system log level is 0 by default; try increasing it to 2 or 3 and save it.
    Method 2:
    Enable the log level per report by putting the syntax below in the Prefix section of the Advanced tab:
    SET VARIABLE LOGLEVEL=2,DISABLE_CACHE_HIT=1;
    It should generate the log with the database SQL as well.
    Method 3:
    Create a session variable (LOGLEVEL) with an init block.
    In your init block's data source, put a query like the one below:
    select 3 from IW_POSITION
    Note: just point it at any existing physical table in your RPD.
    Then save it and test it.
    Refer to this screen:
    http://bidevata.wordpress.com/2012/03/03/no-log-found-error-in-obiee-11g/
    Thanks
    Deva

  • Tree table is not reflecting the updated model data changes at the front end

    I have two tables ,
    1) Provider table(tree table)  2)member table
    I have implemented drag and drop functionality using jQuery UI on both tables.
    In my scenario, when I drag a member from the member table and drop it on the provider table, or when I delete an assigned member from the provider table, I update the data fetched from the OData model and then call the method that binds the data to the provider table again, so that the table reflects the changes.
    Here is the code.
    On drop:
    $("#Provider tbody tr").droppable({
      drop: function(event) {
        oController.AssignMember(oProviderId, oMemberId);
      }
    }).disableSelection();
    Assign member function (here I am updating the model):
    AssignMember: function(oProviderId, oMemberId) {
      var oModel = new sap.ui.model.odata.ODataModel("../../../services/provider.xsodata/", true);
      var oParameters = {};
      oParameters.PROVIDER_ID = oProviderId;
      oParameters.MEMBER_ID = oMemberId;
      oParameters.CREATED_ON = new Date();
      oModel.setHeaders({"content-type" : "application/json;charset=utf-8"});
      // create the assignment, then re-read the provider data on success
      oModel.create("/PROVIDERMEMBERS", oParameters, null, function() {
        var oController = sap.ui.controller("adsm.provider.member_assignment_view");
        oController.GetProviderData();
      }, function(jqXHR) {
        // show the OData error message returned by the service
        var errorMessage = jqXHR.response.body;
        var jsondata = JSON.parse(errorMessage);
        sap.ui.commons.MessageBox.alert(jsondata.error.message.value);
      });
    },
    GetProviderData function (here I bind the data to the table):
    GetProviderData: function() {
      var oModel = new sap.ui.model.odata.ODataModel("../../../services/provider.xsodata/", true);
      var Context = "/PROVIDERS?$expand=ASSIGNEDMEMBERS&$select=NAME,ID,ASSIGNEDMEMBERS/NAME,ASSIGNEDMEMBERS/ID,ASSIGNEDMEMBERS/PROVIDER_ID";
      var oTable = sap.ui.getCore().byId("tblProviders");
      oModel.read(Context, null, null, true, onSuccess, onError);

      function onSuccess(oEventdata) {
        var outputJson = {};
        var p = 0;
        // the payload may arrive as oEventdata, oEventdata.d or oEventdata.d.results
        var r = oEventdata || {};
        if (r.d) {
          r = r.d;
        }
        if (r.results) {
          r = r.results;
        }
        $.each(r, function(i, j) {
          outputJson[p] = {};
          outputJson[p]["NAME"] = j.NAME;
          outputJson[p]["ID"] = j.ID;
          outputJson[p]["PROVIDER_ID"] = j.ID;
          outputJson[p]["DELETE"] = 0;
          var m = 0;
          if (j.ASSIGNEDMEMBERS.results.length > 0) {
            $.each(j.ASSIGNEDMEMBERS.results, function(a, b) {
              outputJson[p][m] = {
                NAME: b.NAME,
                ID: b.ID,
                PROVIDER_ID: b.PROVIDER_ID,
                DELETE: 1
              };
              m++;
            });
          }
          p++;
        });
        // put the flattened result into a JSON model for the tree table
        var oJsonModel = new sap.ui.model.json.JSONModel();
        oJsonModel.setData(outputJson);
        oTable.setModel(oJsonModel);
      }

      function onError(oEvent) {
        console.log("Error on Provider Members");
      }

      oTable.bindRows({
        path: "/"
      });
    },
    It works fine in Chrome, but in IE the model data gets updated while the table does not reflect the changes at the front end. Can anyone suggest a possible solution to fix this?
    Please have a look at the attached screen shots.
    Best regards,
    Amala Suganya.

    Hi Amala,
    I think this will help you:
    Disabling Cache for CRUD/FI OData scenarios for a UI5 Application on Internet Explorer
    Kind regards,
    RW
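    For reference, a minimal sketch of one common workaround for IE caching OData GET responses, using the same ODataModel API already shown in the code above (the header values are the usual cache-busting ones and are an assumption, not taken from the linked post):

    // ask IE not to serve the OData GET responses from its cache
    var oModel = new sap.ui.model.odata.ODataModel("../../../services/provider.xsodata/", true);
    oModel.setHeaders({
      "Cache-Control": "no-cache, no-store, must-revalidate",
      "Pragma": "no-cache"
    });
    // alternatively, force the bindings to refetch after an update
    // oModel.refresh(true);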

  • Day wise balance report

    Hi,
    I want a report of the day-wise balance for all vendors.
    Can anyone guide me on this?
    Sudhanshu

    Hope this will help you..
    ACCOUNTS PAYABLE
    (Note: reports similar to those available for A/R are also available for A/P.)
    1. Vendor Balances - S_ALR_87012082
        i) Advance to staff register, monthly (S_ALR_87012082).
           For loans to staff, enter Sp. GL indicator 2 and tick Special GL Balance.
        ii) For balances in employee accounts, enter reconciliation account no. 35300.
        iii) For cumulative and non-cumulative fixed deposits, choose reconciliation
             account 31001 or 31002 as the case may be.
        iv) For sundry creditors balances, enter recon account 35001/35100.
    2. Vendor Debit/Credit Memo Register - S_ALR_87012287
        In SAP, a credit memo debits the vendor account; the document type is KG.
        There is no debit memo concept in SAP. For crediting the vendor other than by the
        regular invoice procedure, document type KA is to be selected.
        Choose transaction code S_ALR_87012287 (Document Journal) and enter the
        above-mentioned document types under Dynamic Selections when running the register.
    Satish
    Assign points if useful

  • Report - Day wise /Week wise

    Hi Experts,
    I have 2 reports for which the data is fetched from 1 DSO.
    I have CREATED DATE and DESPATCHED DATE in it.
    I want to view both reports day-wise / week-wise.
    In the DSO I have mapped the created date to CalDay and CalWeek/Year, which will be used in report 1.
    I want to do the same mapping for the despatched date too, in order to see the day/week trend in report 2.
    Since there is only one CalDay and one CalWeek/Year object available in the DSO, I am not able to produce the trend for the despatched date.
    Can anyone give a solution to fix this issue, please?
    Thanks

    There are different solutions for this. The easiest is the one described by Roy:
    Create two InfoObjects, DESP_DAY and DESP_WK, copy them from 0CALDAY and 0CALWEEK, and populate them from DESPATCHED_DATE in the transformation. I think you might need a routine for DESP_WK; simply use the function module DATE_GET_WEEK there.
    If this doesn't work for you, e.g. if the DSO is used in a MultiProvider, you might add an additional InfoObject DATE_TYPE to the DSO key. This is CHAR 1 and filled with either C for created date or D for despatched date. Then you use two different field groups in the transformation to create two lines: one where 0CALDAY and 0CALWEEK are filled from CREATED_DATE with DATE_TYPE C, and one where they are filled from DESPATCHED_DATE with date type D.
    Best regards
    Dirk
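    A rough sketch of such a field routine for DESP_WK, assuming the source field is named DESPATCHED_DATE (the field name and routine skeleton are illustrative, not from the post; DATE_GET_WEEK returns the week in YYYYWW format):

        METHOD compute_desp_wk.
          DATA lv_week TYPE scal-week.
          " derive the calendar week from the despatched date
          CALL FUNCTION 'DATE_GET_WEEK'
            EXPORTING
              date         = source_fields-despatched_date
            IMPORTING
              week         = lv_week
            EXCEPTIONS
              date_invalid = 1
              OTHERS       = 2.
          IF sy-subrc = 0.
            result = lv_week.
          ENDIF.
        ENDMETHOD.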

  • Message sent from RWB but not reflected in SXMB_MONI

    Hello Experts,
    I have a SOAP - RFC synchronous scenario. I am sending a message from the RWB to the PI system. It shows the message as sent in the RWB, and the message details appear in RWB -> Message Monitoring -> Adapter Engine as 'Successful', but not in RWB -> Message Monitoring -> Integration Engine.
    The message does not appear in SXMB_MONI either. All other scenarios are working fine except this one.
    Also, on checking the status of the sender communication channel in the RWB, it shows 'yellow' with the reason that the channel may be inactive or uninitialised, whereas when I check the communication channels in the ID their status is Active.
    Kindly advise a solution.
    Thanks in advance,
    Elizabeth.

    Hi,
    Your message was processed by the sender adapter, but it seems there is some problem reaching the Integration Engine. Check your WSDL file: when you defined the web service, did you give the right URL? It should be of the form
    http://[server]:[port]/XISOAPAdapter/MessageServlet?channel=:[sender communication channel service]:[sender communication channel name]
    Also make sure you have given the right HTTP port, sender business system name and sender message interface in the corresponding step of Define Web Service. The port is important; you can find the HTTP port of your PI server in transaction SMICM (press Shift + F1 there).
    You can use the Altova XMLSpy software for testing; a free 30-day trial version can be downloaded from the Altova website. Once it is installed, open XMLSpy; in the menu you will find "SOAP", and under this menu you can create a SOAP request. Select that and it will ask for the WSDL file; give it the WSDL file you created in the Define Web Service step. After that, in the XMLSpy menu choose SOAP -> Send request to server. It will then ask for a username and password to connect to the PI server, and you will get the RFC response if you connect to the PI server successfully.
    For more details:
    http://www.sdn.sap.com/irj/scn/weblogs;jsessionid=(J2EE3417200)ID1437447550DB12110496670821140257End?blog=/pub/wlg/4550
    (Note that this blog uses a different third-party tool rather than XMLSpy.) Good luck.
    Regards,
    Balaji

  • Navigational Attribute Transported but not reflected.

    Hi,
    A couple of navigational attributes were transported from Dev to QA,
    but they are not reflected in QA now.
    In the transport log it gives the following warning messages:
    Navigation attribute ZABCLOC_ZZLINETYPE1 is deleted (not in characteristic ZABCLOC)
    Navigation attribute ZABCLOC_ZZABCLOC1 is deleted (not in characteristic ZABCLOC)
    Please let me know how to transport the required navigational attributes from one system to another.
    Thanks,
    Sowrabh

    If the primary object is not active, you will face issues.
    Since this is Quality, you can try deleting the P table content of the InfoObject and then activating that InfoObject in the Quality system with the InfoObject activation program (the name starts with RSDG); you can try this in your system.
    Then try re-importing the transport in overwrite mode; this should solve your issue.
    thanks
    Vishnu

  • Changes are not reflecting in Quality server

    Dear All,
    We have recently installed the Portal 7.4 servers.
    Currently we have two servers, Development and Quality. We have configured NWDI using CMS and created tracks (with the runtime systems defined as Development and Consolidation). We are able to check in the activities from NWDS. When we select the component in Consolidation and click Import, we get the message "import finished", but the changes are not reflected on the Quality server. In the earlier 7.01 version, when we imported, the EAR was deployed automatically in Quality, but it is not getting deployed in the new version 7.4.
    Regards,
    Ramana.

    Dear Ervin/Jun Wu,
    Thanks for the response.
    As per the given link, the track is configured without any issues. (When we click on Deployment in the Transport Studio, it opens a new tab like the one below, and we do not get any errors; when we click on Start Deployment, nothing happens.)
    http://hostname:50000/webdynpro/dispatcher/sap.com/tc~SL~CMS~WebUI/Deployer?BS=EPD_PRTADEV_C
    Please check the log below regarding the deployment when we do the import.
    SDM-deployment-notification  Log file.
    20141028125231 Info :Starting Step SDM-deployment-notification at 2014-10-28 12:52:31.0994 +5:00
    20141028125231 Info :Deployment is performed asynchronously.
    20141028125231 Info :Following DCs are marked for deployment (buildspace = EPD_PRTADEV_C):
    20141028125231 Info :
    20141028125231 Info :RequestId: 152
    20141028125231 Info :==> no resulting DCs for deployment
    20141028125231 Info :Follow-up requests:
    20141028125231 Info :
    20141028125231 Info :
    20141028125231 Info :Step SDM-deployment-notification ended with result 'success' at 2014-10-28 12:52:31.0995 +5:00
    Regards,
    Ramana.

  • Changes are not reflecting in the Quality

    Hi,
    These are standard SAP components. I have imported the ESS and MSS packages.
    Actually, 3 development components are used in my application (Tra, TraTri and TraTre); Tra is the root DC.
    I have made changes to those objects and built successfully without errors.
    After checking in all activities, those activities are working fine in the Development system.
    But those changes are not reflected in the Quality system, and I found a build log error in Consolidation.
    I fixed the error in the Development system and built the components without build errors.
    Still the changes are not reflected in the Quality system.
    Out of the 3 DCs (Tra, TraTri, TraTre), Tra and TraTre are working fine in Quality (changes are reflected); the only problem is in the TraTri DC (changes are not reflected in Quality).
    Please help me as soon as possible; it would be very helpful for me.
    Regards
    Sudhakar Reddy A

    Hi Slava,
    Many thanks for the fast reply, but one thing on my side:
    When I release the activities after a successful build, I export those changes and place them in the import queue of the consolidation system in the CMS.
    The export in the SAP NetWeaver Developer Studio packs all selected activities into a change request and then places it in the import queue of the consolidation system.
    When my system administrator imports this request into the consolidation system, the released changes are integrated into the DTR workspace of the consolidation system, and the build server compiles the modified components.
    My system administrator found the following build error in the consolidation system:
    [wdgen] [Error]   .PersonnelNumberCheck: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   com.sap.xss.tra.tri.vc.changepersno.VcTriChangePersNo --> ContextModelNode PersonnelNumberCheck [modelClass]: The context model node has not been bound to a model class (Hint: A Context model node has to be bound to a model class or mapped to a model node of another controller.)
         [wdgen] [Error]   .PersonnelNumberCheck.I_Employeenumber: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   com.sap.xss.tra.tri.vc.changepersno.VcTriChangePersNo --> ContextModelNode PersonnelNumberCheck_Output [modelClass]: The context model node has not been bound to a model class (Hint: A Context model node has to be bound to a model class or mapped to a model node of another controller.)
         [wdgen] [Info]    com.sap.xss.tra.tri.vc.changepersno.VcTriChangePersNo --> ContextModelNode PersonnelNumberCheck_Output [supplyingRelationRole]: Supply function or supplying relation role missing (Hint: A child node which is not mapped must have either a supplying relation role or a supply function or one of its parent nodes must have a supply function.)
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.E_Name: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   com.sap.xss.tra.tri.vc.changepersno.VcTriChangePersNo --> ContextModelNode PersonnelNumberCheck_Et_Return [modelClass]: The context model node has not been bound to a model class (Hint: A Context model node has to be bound to a model class or mapped to a model node of another controller.)
         [wdgen] [Info]    com.sap.xss.tra.tri.vc.changepersno.VcTriChangePersNo --> ContextModelNode PersonnelNumberCheck_Et_Return [supplyingRelationRole]: Supply function or supplying relation role missing (Hint: A child node which is not mapped must have either a supplying relation role or a supply function or one of its parent nodes must have a supply function.)
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Type: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Message: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Log_No: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Field: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.System: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Message_V1: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Message_V3: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Message_V2: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Log_Msg_No: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Number: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Message_V4: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Parameter: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Id: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   .PersonnelNumberCheck.PersonnelNumberCheck_Output.PersonnelNumberCheck_Et_Return.Row: The mapping definition is inconsistent, the mapped context element does not exist.
         [wdgen] [Error]   com.sap.xss.tra.tri.vc.changepersno.VcTriChangePersNo --> ContextValueAttribute EmployeeName [readOnly]: The context attribute has to be read-only. (Hint: As the mapped context attribute is read-only this attribute has to be read-only, too.)
    I fixed the error in my development system without any build errors, but it is still not working. Please give me any suggestions; I don't have access to the consolidation system.
    Regards,
    Sudhakar
