New log on session

Hello experts,
I'll be very grateful if anybody could help me with this issue. I would like to check, in an ABAP transaction, whether at least one session is already open for a particular account. If not, I would like to ask the user for a password and start a new session on that account without using SAP Logon.
Thanks in advance.

Hi,
what is the name of the table that contains information about the number of open user sessions? When you log on through SAP Logon and you already have a session open, the system asks whether you would like to kill the last session. That means this kind of information is stored somewhere in the system tables.
Why not try this way?
1. You have a list of newly created users
2. Find out which transactions the user has executed for that day (SM19, SM20, STAT, ST03)
Without using SAP Logon?
1. User type
2. RFC users (not dialog users)
Table for storing SAP session count
Thanks,
Sri
Edited by: sri on Aug 3, 2010 9:32 AM

Similar Messages

  • Can I disable logging for session in Oracle 10g?

    I use a procedure to delete a lot of rows for an application repeatedly. Because the DELETE statement is time consuming and the data don't need to be archived, I decided to use the nologging option.
    How do I do it?
    What is the best choice? Can I disable logging for the session in Oracle 10g?
    Thank you
    Edited by: jetq on Jul 23, 2009 9:46 AM

    Hi,
    "Delete" without generating redo-log is not possible.
    If you are on 10g, one way of making this thing efficiant is partition the table, with range-list partitioning. Partitioning existing table will be an excercise in itself, but that will be one time activity.
    In partitioned (or sub-partitioned) tables, you can truncate a partition (or subpartition). That won't generate any redo log (or very very less redo log) and that runs in seconds.
    In your case if you range partition INCOMING table by datetime (1 partition per day) and list sub-partition it by STATUS, that would help.
    Another approach is, if you are deleteing, say 80% records every day and leaving 20% (or very less) records. What you can do is, partition the table only by range on datetime. Then, every time you want to delete data, copy the rows you want to keep in some other table (or temporary table), truncate partition for that day and insert rows back (which you want to keep).
    I have done a similar thing and it works very quickly and generates very less redo log. Redo log generated in case of truncating partition or creating new partitions is just for Oracle internal commands (like data dictionary update etc).
    Have fun.
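    As a rough sketch of the copy-truncate-insert approach described above - the partition name P_20090722, the staging table INCOMING_KEEP and the keep-predicate are made-up placeholders; only the INCOMING table and STATUS column come from the post:
    -- assumes INCOMING is already range partitioned by day
    CREATE TABLE incoming_keep AS
      SELECT * FROM incoming PARTITION (p_20090722)
      WHERE  status IN ('PENDING');                        -- whatever marks the rows you must retain
    ALTER TABLE incoming TRUNCATE PARTITION p_20090722;     -- near-zero redo compared with DELETE
    INSERT /*+ APPEND */ INTO incoming
      SELECT * FROM incoming_keep;                          -- direct-path insert of the kept rows
    COMMIT;
    DROP TABLE incoming_keep PURGE;
    Remember that truncating a partition invalidates global indexes unless you add UPDATE GLOBAL INDEXES, and rows removed this way cannot be recovered from redo.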

  • Checkpoint not complete;;cannot allocate new log ;;; PLZ HELP ME

    Hi all,
    We are working on Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 on a Redhat Linux Server platform.
    We are facing the following problem in the alert.log file :
    Wed Aug 22 02:58:57 2007
    Thread 1 cannot allocate new log, sequence 43542
    Checkpoint not complete
    Current log# 1 seq# 43541 mem# 0: /u01/oradata/DB01/redo01.log
    Thread 1 advanced to log sequence 43542
    Current log# 4 seq# 43542 mem# 0: /u01/oradata/DB01/redo04.log
    Current log# 4 seq# 43542 mem# 1: /u01/oraindx/DB01/redo04.log
    Wed Aug 22 03:00:00 2007
    Thread 1 advanced to log sequence 43543
    Current log# 5 seq# 43543 mem# 0: /u01/oradata/DB01/redo05.log
    Current log# 5 seq# 43543 mem# 1: /u01/oraindx/DB01/redo05.log
    Wed Aug 22 03:01:00 2007
    Thread 1 cannot allocate new log, sequence 43544
    Checkpoint not complete
    Current log# 5 seq# 43543 mem# 0: /u01/oradata/DB01/redo05.log
    Current log# 5 seq# 43543 mem# 1: /u01/oraindx/DB01/redo05.log
    Thread 1 advanced to log sequence 43544
    Current log# 6 seq# 43544 mem# 0: /u01/oradata/DB01/redo06.log
    Current log# 6 seq# 43544 mem# 1: /u01/oraindx/DB01/redo06.log
    Wed Aug 22 03:01:26 2007
    Thread 1 advanced to log sequence 43545
    Current log# 2 seq# 43545 mem# 0: /u01/oradata/DB01/redo02.log
    Thread 1 advanced to log sequence 43546
    Current log# 3 seq# 43546 mem# 0: /u01/oradata/DB01/redo03.log
    Thread 1 advanced to log sequence 43547
    Current log# 1 seq# 43547 mem# 0: /u01/oradata/DB01/redo01.log
    Wed Aug 22 03:01:38 2007
    Thread 1 cannot allocate new log, sequence 43548
    Checkpoint not complete
    Current log# 1 seq# 43547 mem# 0: /u01/oradata/DB01/redo01.log
    I know that this message indicates that Oracle wants to reuse a redo log file, but
    the current checkpoint position is still in that log. In this case, Oracle must
    wait until the checkpoint position passes that log. Because the
    incremental checkpoint target never lags the current log tail by more than 90%
    of the smallest log file size, this situation may be encountered :
    1-if DBWR writes too slowly,
    or
    2-if a log switch happens before the log is completely full,
    or
    3-if log file sizes are too small.
    I read some posts in this forum regarding this error, but honestly I don't know how to find its exact cause. Should I add new redo log files or a new redo group? I don't know how to resolve it.
    Currently I have 6 redo log files: 3 of them are 5MB and the other 3 are 10MB.
    Thank you,
    Regards,
    Message was edited by:
    HAGGAR

    1. Make DBWR write more aggressively - the parameter I would use is FAST_START_MTTR_TARGET = (how long you want instance recovery to take, in seconds). The lower that number, the more aggressively DBWR has to write to keep up with the target. The advantage is that by the time LGWR comes to overwrite a redo log file, the chances are that DBWR has already written the "high scn#" (and beyond) from the checkpoint queue.
    The disadvantage is that you will get more I/O to your disks.
    2. Create more redo log file groups - this gives DBWR more time to write before LGWR tries to overwrite a particular redo log file. Again, the chances are that the extra (6th or 7th) group will give CKPT enough time to checkpoint completely beyond the "highest scn#" before that group is needed again.
    Which one to go for is up to you and your setup. If you have an I/O-bound system then 2 would be better, as 1 will just add to your I/O problem; however, if physical space is an issue and I/O isn't, then 1 might be better (with the added advantage that instance recovery will also be faster).
    Sorry for the training session, but as with everything to do with Oracle, there is rarely one solution that applies to everyone...
    Gopu
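    A minimal sketch of both options, assuming an spfile and using placeholder values (300 seconds and 100M are examples only; the group number and file paths simply mirror the layout shown in the alert log):
    -- option 1: drive incremental checkpointing from a recovery-time target
    ALTER SYSTEM SET FAST_START_MTTR_TARGET = 300 SCOPE = BOTH;
    -- option 2: add another (and larger) redo log group so CKPT gets more time per cycle
    ALTER DATABASE ADD LOGFILE GROUP 7
      ('/u01/oradata/DB01/redo07.log',
       '/u01/oraindx/DB01/redo07.log') SIZE 100M;
    With 5MB and 10MB logs, resizing the existing groups upwards is likely to matter at least as much as the exact parameter value.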

  • Cannot allocate new log&Private strand flush not complete

    There are two Oracle 10g (10.2.0.5) databases running on a SUN M5000 (SPARC64 VII 2.53 GHz quad-core processors x4 / 32GB memory).
    The first Oracle 10g DB is running with an 8GB SGA and 1GB PGA; the second is running with a 7GB SGA and 2GB PGA.
    We encountered an issue on the second Oracle DB when we manually switch the online redo logs with an ALTER SYSTEM command as below.
    SQL> alter system switch logfile;
    The Alert log will display the following messages:
    Thread 1 cannot allocate new log, sequence xxx
    Private strand flush not complete .
    None of the other Oracle 10g DBs produce the "Private strand flush not complete" message; only the second one does. I want to know the detailed reason.
    I also saw two documents, [ID 372557.1] and [ID 435887.1], on Oracle Metalink, but I am not satisfied with what they say.
    My goal is to resolve this issue.
    Can anyone help me?
    I am kylix.
    [email protected]

    hi Hemant,
    I am very new to this field. For the past 20 days I have had one big problem with my databases. I have 2 databases in my Windows server environment; recently I upgraded both from 10g to 11g. Why did I upgrade? In 10g I was getting error ORA-07445 (access violation) in my alert log file whenever I took a full database backup through export, and the export stopped with that error. I thought about applying a patch, but after discussion with my manager and team we all decided this was the time to upgrade, so we moved to 11g.
    The 11g upgrade completed 100% successfully. To verify export on 11g, I tried to take a logical backup of my 2 databases:
    C:\Users\Administrator>exp system/password@tnsname file=full_05_1_2013.dmp log=05_1_2013.log full=y
    Export: Release 11.2.0.1.0 - Production on Sat Jan 19 09:18:06 2013
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
    server uses AR8MSWIN1256 character set (possible charset conversion)
    About to export the entire database ...
    . exporting tablespace definitions
    . exporting profiles
    . exporting user definitions
    . exporting roles
    . exporting resource costs
    . exporting rollback segment definitions
    . exporting database links
    . exporting sequence numbers
    . exporting directory aliases
    . exporting context namespaces
    . exporting foreign function library names
    . exporting PUBLIC type synonyms
    . exporting private type synonyms
    . exporting object type definitions
    . exporting system procedural objects and actions,
    What happens is that the export gets to this point and then stops for hours - more than 10 hours (I have to kill the session manually). A single-schema export works, but when I take a full database export it hangs. I have cross-checked every possibility - granting full privileges, checking space, and many other things - but found no solution. I am unable to take a full backup (though I can take a single-user backup).
    I have even applied a bundle patch (Oracle Database Server Version 11.2.0.1 Patch 16) 100% successfully, but the error continues.
    NOTE: one more twist is that after I bounce the database I can take a full database backup without errors or warnings 6 or 7 times, and then on the 8th attempt it stops again at the point above (so whenever I need a full logical backup I have to bounce the DB first).
    Why is it stopping here and not going forward? The issue is the same with both databases.
    NOTE: the error in my alert log file and trace file is:
    trace file d:\app\admin\diag\rdbms\memfa\memfa\trace\memfa_j000_7844.trc
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Windows NT Version V6.0 Service Pack 2
    CPU : 16 - type 8664, 16 Physical Cores
    Process Affinity : 0x0x0000000000000000
    Memory (Avail/Total): Ph:1148M/16379M, Ph+PgF:17118M/32914M
    Instance name: abha210
    Redo thread mounted by this instance: 1
    Oracle process number: 22
    Windows thread id: 7844, image: ORACLE.EXE (J000)
    *** 2013-01-17 12:00:00.124
    *** SESSION ID:(196.16104) 2013-01-17 12:00:00.124
    *** CLIENT ID:() 2013-01-17 12:00:00.124
    *** SERVICE NAME:(SYS$USERS) 2013-01-17 12:00:00.124
    *** MODULE NAME:(DBMS_SCHEDULER) 2013-01-17 12:00:00.124
    *** ACTION NAME:(PURGE_LOG) 2013-01-17 12:00:00.124
    purge queue_table SYS.SCHEDULER_FILEWATCHER_QT
    purge queue_table SYS.SCHEDULER$_EVENT_QTAB
    alertlogfile___________________________
    Thread 1 cannot allocate new log, sequence 123
    Private strand flush not complete
    Current log# 4 seq# 122 mem# 0: D:\APP\ADMIN\ORADATA\memfa\REDO04.LOG
    Thread 1 advanced to log sequence 123 (LGWR switch)
    Current log# 5 seq# 123 mem# 0: D:\APP\ADMIN\ORADATA\memfa\REDO05.LOG
    I have already changed my redo logs from 50MB to 500MB each (3 of them) and increased the retention time too.
    I have tried running the full database export command more than 50 times, but it does not work - sometimes it sits at that point for 24 hours. If I bounce both databases, the full database export completes in 9 minutes, successfully, without errors or warnings.
    These are very new databases, roughly 8GB each.
    Please assist/help me in solving this issue.

  • Thread 1 cannot allocate new log

    hi,
    I have a 10g database with the following redo logs:
    8,209715200,"/oracle/app/oradata/ORCA/redo8.log","INACTIVE"
    6,209715200,"/oracle/app/oradata/ORCA/redo6.log","INACTIVE"
    5,209715200,"/oracle/app/oradata/ORCA/redo5.log","CURRENT"
    7,209715200,"/oracle/app/oradata/ORCA/redo7.log","INACTIVE"
    and alert_ORCA.log=
    Current log# 8 seq# 85 mem# 0: /oracle/app/oradata/ORCA/redo8.log
    Sat May 30 01:11:03 2009
    Starting control autobackup
    Sat May 30 01:11:08 2009
    Errors in file /oracle/app/admin/ORCA/udump/orca_ora_12223.trc:
    Sat May 30 01:11:08 2009
    Errors in file /oracle/app/admin/ORCA/udump/orca_ora_12223.trc:
    Sat May 30 01:11:08 2009
    Errors in file /oracle/app/admin/ORCA/udump/orca_ora_12223.trc:
    Control autobackup written to DISK device
         handle '/backup_rman/c-626207015-20090530-00'
    Sat May 30 01:11:27 2009
    Thread 1 cannot allocate new log, sequence 86
    Private strand flush not complete
    Current log# 8 seq# 85 mem# 0: /oracle/app/oradata/ORCA/redo8.log
    Thread 1 advanced to log sequence 86
    Current log# 6 seq# 86 mem# 0: /oracle/app/oradata/ORCA/redo6.log
    Sat May 30 01:12:00 2009
    Starting control autobackup
    Sat May 30 01:12:04 2009
    Errors in file /oracle/app/admin/ORCA/udump/orca_ora_12223.trc:
    Sat May 30 01:12:04 2009
    Errors in file /oracle/app/admin/ORCA/udump/orca_ora_12223.trc:
    Sat May 30 01:12:04 2009
    Errors in file /oracle/app/admin/ORCA/udump/orca_ora_12223.trc:
    Control autobackup written to DISK device
    and orca_ora_12223.trc=
    ORACLE_HOME = /oracle/app/product/10g/db_1
    System name: Linux
    Node name: linserver
    Release: 2.6.9-42.0.0.0.1.ELsmp
    Version: #1 SMP Sun Oct 15 14:02:40 PDT 2006
    Machine: i686
    Instance name: ORCA
    Redo thread mounted by this instance: 1
    Oracle process number: 41
    Unix process pid: 12223, image: oracle@linserver (TNS V1-V3)
    *** ACTION NAME:(0000001 STARTED1) 2009-05-30 01:00:18.444
    *** SERVICE NAME:(SYS$USERS) 2009-05-30 01:00:18.444
    *** SESSION ID:(678.16384) 2009-05-30 01:00:18.444
    ksfqxc:ctx=0xb72646f4 flags=0x30000000 dev=0x0 lbufsiz=1048576 bp=0xcc16258
    *** ACTION NAME:(0000016 STARTED16) 2009-05-30 01:00:20.097
    *** MODULE NAME:(backup full datafile) 2009-05-30 01:00:20.097
    ksfqxc:ctx=0xb7251f8c flags=0x32000000 dev=0x0 lbufsiz=1048576 bp=0xcc15ed4
    ksfqfcrx:ctx=0xb72646f4 filename=/backup_rman/66kg9id4_1_1 blksiz=8192 startblk=
    1
    ksfqfret:ctx=0xb7251f8c filename=/bck/undo blksiz=8192 startblk=1
    ksfqrd:ctx=0xb7251f8c nblocks=0 wait=0
    ksfqrd:buf=0xb67fd000 offset=1 blksread=128
    ksfqwr:ctx=0xb72646f4 nblocks=128 buf=0xcc16290
    And I get "Thread 1 cannot allocate new log".
    What should I do?

    Please check the ML note, *372557.1* for the mentioned message.
    HTH
    Aman....

  • Error – SAML Single Logout request does not correspond to the logged-in session participant

    We are relatively new to ADFS, having set up working RP trusts with three partners in the last few months. Our 4th partner is proving problematic. Single sign-in works, but ADFS responds to the single logout request from the RP with a status of Requester. The ADFS event log shows:
    The SAML Single Logout request does not correspond to the logged-in session participant.
    Requestor: https://test-sso.rp.com/fed/sp
    Request name identifier: Format: urn:oasis:names:tc:SAML:2.0:nameid-format:persistent, NameQualifier: http://fs.idp.com/adfs/services/trust SPNameQualifier:
    https://test-sso.rp.com/fed/sp, SPProvidedId: 
    Logged-in session participants:
    Count: 1, [Issuer: https://test-sso.crmondemand.com/fed/sp, NameID: (Format: , NameQualifier: 
    SPNameQualifier: , SPProvidedId: )] 
    This request failed.
    User Action
    Verify that the claim provider trust or the relying party trust configuration is up to date. If the name identifier in the request is different from the name identifier
    in the session only by NameQualifier or SPNameQualifier, check and correct the name identifier policy issuance rule using the AD FS 2.0 Management snap-in.
    The LogoutRequest looks like this
    <samlp:LogoutRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    Destination="https://fs.timken.com/adfs/ls/"
                    ID="id-HAScmHCfwfuYk76bce6YBfO2uOM-"
    IssueInstant="2013-01-14T13:24:04Z"
    Version="2.0">
    . . . cert, etc. omitted . . .
    <saml:NameID xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"
    NameQualifier="http://fs.idp.com/adfs/services/trust"
    SPNameQualifier="https://test-sso.rp.com/fed/sp"
    >jsmith</saml:NameID>
    <samlp:SessionIndex>_df13d31b-162e-42e1-8331-f36be6bf1194</samlp:SessionIndex>
    </samlp:LogoutRequest>
    The session index and the username in NameID match the Response we got from our AuthnRequest. I don't know how to figure out what ADFS thinks does not match.
    Any suggestions would be appreciated.
    For completeness sake, the Response to AuthRequest looked like this.
    <Subject>
                <NameID>jsmith</NameID>
                <SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
                    <SubjectConfirmationData NotOnOrAfter="2013-01-14T13:28:52.199Z"
                                             Recipient="https://test-sso.rp.com/fed/sp/authnResponse20"
                                             />
                </SubjectConfirmation>
            </Subject>
            <Conditions NotBefore="2013-01-14T13:23:52.183Z"
                        NotOnOrAfter="2013-01-14T14:23:52.183Z"
                        >
                <AudienceRestriction>
                    <Audience>https://test-sso.rp.com/fed/sp</Audience>
                </AudienceRestriction>
            </Conditions>
            <AuthnStatement AuthnInstant="2013-01-14T13:10:43.826Z"
                            SessionIndex="_df13d31b-162e-42e1-8331-f36be6bf1194"
    >

    Okay, here are the relevant SAML messages.
    The <AuthnRequest>
    <samlp:AuthnRequest ID="_ced78e65-14d2-4c4d-8417-51f664a9e2e3"
                        Version="2.0"
                        IssueInstant="2013-02-04T13:29:20.887Z"
                        Destination="https://fs.timken.com/adfs/ls/"
                        Consent="urn:oasis:names:tc:SAML:2.0:consent:unspecified"
                        xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
                        >
        <Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">http://fs.timken.com/adfs/services/trust</Issuer>
        <Conditions xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
            <AudienceRestriction>
                <Audience>https://test-sso.salesdemand.com/fed/sp</Audience>
            </AudienceRestriction>
        </Conditions>
    </samlp:AuthnRequest>
    The AuthnRequest Response
    <samlp:Response ID="_890f3128-6cae-414e-8272-30cde3bda94a" Version="2.0" IssueInstant="2013-02-04T13:29:29.748Z" Destination="https://test-sso.salesdemand.com/fed/sp/authnResponse20" Consent="urn:oasis:names:tc:SAML:2.0:consent:unspecified" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
        <Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">http://fs.timken.com/adfs/services/trust</Issuer>
        <samlp:Status>
            <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success" />
        </samlp:Status>
        <Assertion ID="_82f82c5c-2653-4e18-9308-349ebeb67743" IssueInstant="2013-02-04T13:29:29.748Z" Version="2.0" xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
            <Issuer>http://fs.timken.com/adfs/services/trust</Issuer>
            <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                <ds:SignedInfo>
                    <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
                    <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1" />
                    <ds:Reference URI="#_82f82c5c-2653-4e18-9308-349ebeb67743">
                        <ds:Transforms>
                            <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
                            <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
                        </ds:Transforms>
                        <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
                        <ds:DigestValue>RxZZLlbdh5eD6Ht4+aVna3Rtbnc=</ds:DigestValue>
                    </ds:Reference>
                </ds:SignedInfo>
                <ds:SignatureValue>Es8LAN9noqGIJEbgZe/...XW8LAv5Mgr3tOXpHRlcsJNss/A==</ds:SignatureValue>
                <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
                    <ds:X509Data>
                        <ds:X509Certificate>MIIFDDCCA/SgAwIB...</ds:X509Certificate>
                    </ds:X509Data>
                </KeyInfo>
            </ds:Signature>
            <Subject>
                <NameID>mooreta</NameID>
                <SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">
                    <SubjectConfirmationData NotOnOrAfter="2013-02-04T13:34:29.748Z" Recipient="https://test-sso.salesdemand.com/fed/sp/authnResponse20" />
                </SubjectConfirmation>
            </Subject>
            <Conditions NotBefore="2013-02-04T13:29:29.732Z" NotOnOrAfter="2013-02-04T14:29:29.732Z">
                <AudienceRestriction>
                    <Audience>https://test-sso.salesdemand.com/fed/sp</Audience>
                </AudienceRestriction>
            </Conditions>
            <AuthnStatement AuthnInstant="2013-02-04T13:29:29.545Z" SessionIndex="_82f82c5c-2653-4e18-9308-349ebeb67743">
                <AuthnContext>
                    <AuthnContextClassRef>urn:federation:authentication:windows</AuthnContextClassRef>
                </AuthnContext>
            </AuthnStatement>
        </Assertion>
    </samlp:Response>
    The LogoutRequest
    <samlp:LogoutRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" Destination="https://fs.timken.com/adfs/ls/" ID="id-uvoTioVCLdMycE88o-6CU5RrSNM-" IssueInstant="2013-02-04T13:29:57Z" Version="2.0">
        <saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" Format="urn:oasis:names:tc:SAML:2.0:nameid-format:entity">https://test-sso.salesdemand.com/fed/sp</saml:Issuer>
        <dsig:Signature xmlns:dsig="http://www.w3.org/2000/09/xmldsig#">
            <dsig:SignedInfo>
                <dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
                <dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1" />
                <dsig:Reference URI="#id-uvoTioVCLdMycE88o-6CU5RrSNM-">
                    <dsig:Transforms>
                        <dsig:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature" />
                        <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#" />
                    </dsig:Transforms>
                    <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1" />
                    <dsig:DigestValue>ZT0yQqiaL2dD2a7rt6ywJ9EoM1I=</dsig:DigestValue>
                </dsig:Reference>
            </dsig:SignedInfo>
            <dsig:SignatureValue>Z7F7zYS31y1K48FbUHevJT86+txOlPM9awlHiMNj1TiMxRAEVz1rOj2uG0oVMd7NkblkneCrE8aVtJuebdUY4Q0DAcXR8lSTuNEFocT2R6eCIwQb48xQqQMs8ZE6siPsPFMS+QAhpgDom/IY61L/.../NNxVg==</dsig:SignatureValue>
            <dsig:KeyInfo>
                <dsig:X509Data>
                    <dsig:X509Certificate>MIIFxTCCBK2gAwIBAgIQAN+.../G6p95pNm1ZAqroUjufLeHO4q34Mx3xNyw0tmyjmWgkxY11Pa+M0gCeLOdLzxafIOXUFXOhKfOUg4Jp4S+/sCVcd9fBDPvfEHSr8uMmQC2IdQaRE7IvZdRF0OUP+l1MpRBkMsy98hPXTBK6n1ivklOxzmWie88jav8gzjWhwQC5Ia2/JNYxVBkPsNkRw86n8KBnlsumU9EV0dAeXTOaehKtG+RNnD1Gt4Y34TQccaIbf7OTLisY4kMkjZbRu3sJnX9KjM=</dsig:X509Certificate>
                </dsig:X509Data>
            </dsig:KeyInfo>
        </dsig:Signature>
        <saml:NameID xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" Format="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent" NameQualifier="http://fs.timken.com/adfs/services/trust" SPNameQualifier="https://test-sso.salesdemand.com/fed/sp">mooreta</saml:NameID>
        <samlp:SessionIndex>_82f82c5c-2653-4e18-9308-349ebeb67743</samlp:SessionIndex>
    </samlp:LogoutRequest>
    The LogoutRequest Response
    <samlp:LogoutResponse ID="_bf7199a8-3248-4201-9ca4-609bec5404d6" Version="2.0" IssueInstant="2013-02-04T13:29:59.076Z" Destination="https://test-sso.salesdemand.com/fed/sp/samlv20" Consent="urn:oasis:names:tc:SAML:2.0:consent:unspecified" InResponseTo="id-uvoTioVCLdMycE88o-6CU5RrSNM-" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol">
        <Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">http://fs.timken.com/adfs/services/trust</Issuer>
        <samlp:Status>
            <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Requester" />
        </samlp:Status>
    </samlp:LogoutResponse>
    The ADFS Error Log Entry
    The SAML Single Logout request does not correspond to the logged-in session participant.
    Requestor: https://test-sso.salesdemand.com/fed/sp
    Request name identifier: Format: urn:oasis:names:tc:SAML:2.0:nameid-format:persistent, NameQualifier: http://fs.timken.com/adfs/services/trust SPNameQualifier: https://test-sso.salesdemand.com/fed/sp, SPProvidedId:
    Logged-in session participants: Count: 1, [Issuer: https://test-sso.salesdemand.com/fed/sp, NameID: (Format: , NameQualifier: SPNameQualifier: , SPProvidedId: )]
    This request failed.
    User Action
    Verify that the claim provider trust or the relying party trust configuration is up to date. If the name identifier in the request is different from the name identifier in the session only by NameQualifier or SPNameQualifier, check and correct the name identifier policy issuance rule using the AD FS 2.0 Management snap-in.

  • SM35P - Logs without sessions

    Dear All,
    In SM35P (Batch Input - Log Overview), there are two options,
    1. Logs with session
    2. Logs without session
    To my understanding, a session is created with BDC (session method). Please explain what is meant by "Logs without session", in what scenarios these are used, and so on. Please throw some light on this query.
    Thanks,
    Vamsi

    Thanks guys.
    Not sure how to use RSTS0030 - there appears to be no way to select individual entries once the list has been displayed, and clicking 'delete selected' doesn't do anything...
    However, we have managed to clear these by doing the following:
    Go to the "Logs without sessions" tab. Double click on each item and make a note of the TemSe ID.
    Now open a new session (don't close this one) and run transaction SP12. Select Goto-> TemSe contents from the menu and enter the TemSe ID from before. Make sure that the date is set to a valid date. Click 'list objects'. Select the object and click 'delete'.
    Now go back to your SM35 screen. Come back out so you are on the 'Logs without sessions' tab, then right-click and choose the APQL consistency check. It might take quite a while to come up.
    Search for the TemSe ID(s) in the list, click the tick box for each one, and then delete selections.
    Go back and they should have gone (may need to refresh).
    Ross

  • Not able to add new log file to the 11g database.

    Hi DBA's
    I am not able to add a log file; I am getting an error when altering the database.
    SQL> alter database add logfile group 3 ('/oracle/DEV/db/apps_st/data/log03a.dbf','/oracle/DEV/db/apps_st/data/log03a.dbf') size 50m reuse;
    alter database add logfile group 3 ('/oracle/DEV/db/apps_st/data/log03a.dbf','/oracle/DEV/db/apps_st/data/log03a.dbf') size 50m reuse
    ERROR at line 1:
    ORA-01505: error in adding log files
    ORA-01577: cannot add log file '/oracle/DEV/db/apps_st/data/log03a.dbf' - file
    already part of database
    SQL> select a.group#, member, a.status from v$log a, v$logfile b where a.group# = b.group# order by 1;
    GROUP# MEMBER STATUS
    1 /oracle/DEV/db/apps_st/data/log01a.dbf ACTIVE
    1 /oracle/DEV/db/apps_st/data/log01b.dbf ACTIVE
    2 /oracle/DEV/db/apps_st/data/log02a.dbf CURRENT
    2 /oracle/DEV/db/apps_st/data/log02b.dbf CURRENT
    Kindly help me to add the new log file to my database.
    Thanks,
    SG

    Hi Sawwan,
    V$LOGMEMBER was written in the document,
    I query the log members as bellow
    1)select a.group#, member, a.status from v$log a, v$logfile b where a.group# = b.group# order by 1;
    GROUP# MEMBER STATUS
    1 /oracle/DEV/db/apps_st/data/log01a.dbf INACTIVE
    1 /oracle/DEV/db/apps_st/data/log01b.dbf INACTIVE
    2 /oracle/DEV/db/apps_st/data/log02a.dbf CURRENT
    2 /oracle/DEV/db/apps_st/data/log02b.dbf CURRENT
    2)SQL> select group#,member,status from v$logfile;
    GROUP# MEMBER STATUS
    2 /oracle/DEV/db/apps_st/data/log02a.dbf
    2 /oracle/DEV/db/apps_st/data/log02b.dbf
    1 /oracle/DEV/db/apps_st/data/log01a.dbf
    1 /oracle/DEV/db/apps_st/data/log01b.dbf
    But I am a little bit confused: as per the above queries there is no group or log file called "group 3" or "log03a.dbf", so how can I drop that group and file?
    I also cross-verified in the data directory whether the files exist, but they do not. Still, I am getting the same error that the file I am trying to create already exists.
    Can I issue the query below to drop that group, even though I don't think it exists?
    SQL>alter database drop logfile group 3;
    Thanks in advance.
    Regards,
    SG
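    One thing worth checking: the ALTER DATABASE statement quoted at the top of this thread lists the same file name (log03a.dbf) for both members of group 3, and that alone can raise ORA-01577 on the second member. A sketch with two distinct member names - log03b.dbf is a hypothetical name, use whatever fits your convention:
    ALTER DATABASE ADD LOGFILE GROUP 3
      ('/oracle/DEV/db/apps_st/data/log03a.dbf',
       '/oracle/DEV/db/apps_st/data/log03b.dbf') SIZE 50M REUSE;
    -- REUSE only overwrites files left on disk by earlier attempts;
    -- it does not help if the file is still registered in V$LOGFILE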

  • Problems happened when I changed my Apple ID to a new log in. Aperture is no longer on the MAC.

    Several months ago I changed my Apple ID but not any of the hardware and products associated with it. I purchased Aperture under the old login and have been using it regularly on my Mac. Last week my hard drive on the Mac crashed. When I went to reload all the purchases, everything under the new login loaded back fine, but everything from the old ID was no longer available. Aperture is gone...
    Attempts to log in using the old login information have failed, as have countless hours working with customer service.
    I will never change my ID again - but that is not helping now.
    Does anyone have any other suggestions?

    Jim,
    are you sure that you bought Aperture from the App Store, and not from the Apple Store?
    If you bought from the Apple Store, you will have received a serial number. Do you still have it? Then download the Aperture 3 Trial and reinstall - but do this only if you have a serial number!
    Go to the Applications folder and delete Aperture.
    Download the Aperture 3.1 Trial  from here: Aperture 3.1 Trial
    Open the Aperture Trial disc image.
    Double-click the "Aperture Trial.mpkg" file.
    Follow the onscreen instructions to install Aperture.
    After the installation completes, choose Software Update from the Apple () menu.
    Install the Aperture 3.4 update from the list of available updates.
    If you definitely bought from the App Store and not the Apple Store - what happens when you try to log into the App Store? In what way does it fail?
    Regards
    Leónie

  • In Noarchivelog mode "Thread 1 cannot allocate new log, sequence 3298"

    Hi, my database is in noarchivelog mode but I see in my alert.log: "Thread 1 cannot allocate new log, sequence 3298".
    What does this mean?
    I have enough disk space.

    This is very similar to your still-open thread "cannot allocate new log, sequence".
    Better protocol would surely be to acknowledge the references given there, and say why you think they do or do not apply.
    Please get one of these threads marked as answered and continue on the other.
    Please also indicate:
    1) Is your database hung?
    2) Output from this query:
    select log_mode from v$database;
    3) Database Version:
    4) Are there any other messages in alert log.
    5) In sqlplus connected as SYSDBA show output from
    archive log list
    Rgds - bigdelboy.

  • Oracle 10G Checkpoint not complete Cannot allocate new log

    We have an oracle 10G database that has 4 groups of 200M redo logs. We are constantly seeing Checkpoint not complete followed by Cannot allocate new log.. messages in the alert log. When I look at the v$instance_recovery table I see this:
    SQL> select WRITES_MTTR, WRITES_LOGFILE_SIZE, WRITES_LOG_CHECKPOINT_SETTINGS, WRITES_OTHER_SETTINGS,
    2 WRITES_AUTOTUNE, WRITES_FULL_THREAD_CKPT from v$instance_recovery;
    WRITES_MTTR                       0
    WRITES_LOGFILE_SIZE               0
    WRITES_LOG_CHECKPOINT_SETTINGS    66329842
    WRITES_OTHER_SETTINGS             0
    WRITES_AUTOTUNE                   309461
    WRITES_FULL_THREAD_CKPT           41004
    It seems most of our checkpointing is being caused by the log_checkpoint_interval being set to 10000, which means we are doing 20 checkpoints during one redo log since our OS blocksize is 1024. What I'm thinking about doing is first adding two more redo log groups, and then either increasing the log_checkpoint_interval to 40000 (5 checkpoints for one redo log) or unsetting log_checkpoint_interval and setting fast_start_mttr_target to 600. Which would be the recommended parameter change? I think Oracle recommends using the fast_start_mttr_target. Are there any other suggestions?

    Hi
    >> unsetting log_checkpoint_interval and setting fast_start_mttr_target to 600
    That is the better idea - have a look at:
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams068.htm#REFRN10058
    Just set the fast_start_mttr_target parameter; it will resolve your problem.
    Thanks
    Kuljeet
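    A sketch of that change - the 600-second target comes from the original post, and SCOPE = BOTH assumes an spfile:
    ALTER SYSTEM SET LOG_CHECKPOINT_INTERVAL = 0 SCOPE = BOTH;   -- 0 disables the interval-based trigger
    ALTER SYSTEM SET FAST_START_MTTR_TARGET = 600 SCOPE = BOTH;
    Afterwards the WRITES_MTTR column of V$INSTANCE_RECOVERY, rather than WRITES_LOG_CHECKPOINT_SETTINGS, should account for most checkpoint writes.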

  • Thread 1 cannot allocate new log, sequence 714 - Checkpoint not complete

    I'm working on an Oracle 10g database running on RHEL. The alert_*.log file has been putting out this type of text for the past few months:
    Thread 1 advanced to log sequence 713 (LGWR Switch)
    Current log#2 seq# 713 mem# 0: /dbfiles/..../onlinelog/filename.log
    Current log #2 ..........
    Thread 1 cannot allocate new log, sequence 714
    Checkpoint not complete
    Any ideas to get me pointed in the right direction are appreciated.
    Thank you in advance,
    Wes

    The immediate fixes Vishal suggests are the standard answer, but there is more to it, as the MOS doc he references shows. Additional questions I would ask:
    How long is the delay between the inability to switch and when it actually switches? How many datafiles do you have? What is waiting in the db?
    In general, I find making log files sized for the rare massive updates, then using the built-in timeouts to switch every 20 or 30 minutes (depending on management recoverability expectations), helps avoid some of these problems. More specifically, run a statspack or awr (if licensed) report for maybe an hour during heavy times, and see what it thinks the waits are. Tuning is an iterative process, and sometimes different things can cause similar symptoms. Sometimes fixing the biggest problem (which is often poor SQL) can radically change the symptoms observed. Other times you find you have dying hardware, or someone messed up a switch configuration.
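    To put numbers on the switch rate before resizing anything, a simple count of log switches per hour from V$LOG_HISTORY is usually enough (a sketch; widen or narrow the date window as needed):
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS switches
    FROM   v$log_history
    WHERE  first_time > SYSDATE - 7
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY 1;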

  • Thread 1 cannot allocate new log, sequence in Alert log

    Could someone help, I am getting the below message in my alert log
    System: AIX6.1 - Oracle 11r2
    GROUP#     THREAD#     MEMBER                                  ARCHIVED     STATUS             MB
    1     1     +DATA01/mydb/redolog_group1_member1     NO         INACTIVE             1536
    1     1     +DATA01/mydb/redolog_group1_member2     NO         INACTIVE             1536
    2     1     +DATA01/mydb/redolog_group2_member2     NO         CURRENT          1536
    2     1     +DATA01/mydb/redolog_group2_member1     NO         CURRENT          1536
    3     1     +DATA01/mydb/redolog_group3_member2     NO         INACTIVE             1536
    3     1     +DATA01/mydb/redolog_group3_member1     NO         INACTIVE             1536
    Tue Jul 10 18:37:48 2012
    Thread 1 advanced to log sequence 28831 (LGWR switch)
      Current log# 1 seq# 28831 mem# 0: +DATA01/mydb/redolog_group1_member1
      Current log# 1 seq# 28831 mem# 1: +DATA01/mydb/redolog_group1_member2
    Tue Jul 10 19:12:01 2012
    Thread 1 advanced to log sequence 28832 (LGWR switch)
      Current log# 2 seq# 28832 mem# 0: +DATA01/mydb/redolog_group2_member1
      Current log# 2 seq# 28832 mem# 1: +DATA01/mydb/redolog_group2_member2
    Tue Jul 10 19:39:00 2012
    Thread 1 cannot allocate new log, sequence 28833
    Private strand flush not complete
    Tue Jul 10 19:39:18 2012
      Current log# 2 seq# 28832 mem# 0: +DATA01/mydb/redolog_group2_member1
      Current log# 2 seq# 28832 mem# 1: +DATA01/mydb/redolog_group2_member2
    Tue Jul 10 19:41:21 2012
    Thread 1 advanced to log sequence 28833 (LGWR switch)
      Current log# 3 seq# 28833 mem# 0: +DATA01/mydb/redolog_group3_member1
      Current log# 3 seq# 28833 mem# 1: +DATA01/mydb/redolog_group3_member2
    Tue Jul 10 20:15:28 2012
    Thread 1 cannot allocate new log, sequence 28834
    Private strand flush not complete
      Current log# 3 seq# 28833 mem# 0: +DATA01/mydb/redolog_group3_member1
      Current log# 3 seq# 28833 mem# 1: +DATA01/mydb/redolog_group3_member2
    Tue Jul 10 20:16:21 2012
    Could an increase in the redo log file sizes solve this problem? Unfortunately I do not have a test platform where I could have tested this.

    mseberg wrote:
    It's just a warning. I would read these two notes:
    I suspect the OP is more worried about the apparent two-minute gap between the "cannot switch" and the "advanced" at 19:39:18 and 19:41:21 respectively. Did the switch really have to wait for those two minutes, or is the gap spurious, caused by an anomaly in the writing of the alert log?
    As a cross-check, I would examine v$event_histogram (or an AWR/Statspack report for the interval) for time spent in the event "log file switch (private strand flush incomplete)". In principle I wouldn't expect to see waits longer than a few (less than 10) milliseconds; if the histogram shows long waits consistent with the alert log reports then I'd contact Oracle, because something odd is happening.
    (Footnote - it's possible that the event times out after a limited interval; log file sync used to time out after 1 second and has recently changed to something shorter and adjustable, so 2 seconds may, for example, appear as 20 x 0.1 seconds.)
    Regards
    Jonathan Lewis
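    For reference, a query along the lines Jonathan describes might look like this (a sketch; the event name must match exactly):
    SELECT event, wait_time_milli, wait_count
    FROM   v$event_histogram
    WHERE  event = 'log file switch (private strand flush incomplete)';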

  • Which parameter to change to avoid "cannot allocate new log"

    Hello Everyone.
    I'm running 9i R2 on a Windows 2003 Standard Edition server with iSCSI storage attached. On the iSCSI drive I have 1 group of redo logs together with the DBFs in the data directory. The other 2 redo groups are on 2 separate local disks together with the archive logs (in another folder).
    I'm getting a few "cannot allocate new log" errors every couple of days, and in the Event Viewer: "Archive process error: ORACLE Instance prdps - Can not allocate log, archival required".
    I'm not sure which parameter I should change.
    Current setup:
    db_writer_processes     1
    dbwr_io_slaves          0
    Here is the output from v$sysstat:
    49     DBWR checkpoint buffers written     8     7410575
    50     DBWR transaction table writes     8     7748
    51     DBWR undo block writes     8     4600265
    52     DBWR revisited being-written buffer     8     5313
    53     DBWR make free requests     8     26383
    54     DBWR free buffers found     8     19838373
    55     DBWR lru scans     8     21831
    56     DBWR summed scan depth     8     21265425
    57     DBWR buffers scanned     8     21265425
    58     DBWR checkpoints     8     1719
    59     DBWR cross instance writes     40     0
    60     DBWR fusion writes     40     0
    This is from alert.log:
    Fri Mar 06 00:25:52 2009
    ARC0: Completed archiving log 1 thread 1 sequence 7004
    Fri Mar 06 00:25:54 2009
    Thread 1 advanced to log sequence 7006
    Current log# 3 seq# 7006 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO03A.LOG
    Current log# 3 seq# 7006 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO03B.LOG
    Current log# 3 seq# 7006 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO03C.LOG
    Fri Mar 06 00:25:54 2009
    ARC1: Evaluating archive log 2 thread 1 sequence 7005
    ARC1: Beginning to archive log 2 thread 1 sequence 7005
    Creating archive destination LOG_ARCHIVE_DEST_1: 'F:\ORACLE\ORADATA\PRDPS\ARCHIVE\PRDPS_001_07005.ARC'
    Fri Mar 06 00:26:03 2009
    Thread 1 advanced to log sequence 7007
    Current log# 1 seq# 7007 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO01A.LOG
    Current log# 1 seq# 7007 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO01B.LOG
    Current log# 1 seq# 7007 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO01C.LOG
    Fri Mar 06 00:26:03 2009
    ARC0: Evaluating archive log 2 thread 1 sequence 7005
    ARC0: Unable to archive log 2 thread 1 sequence 7005
    Log actively being archived by another process
    ARC0: Evaluating archive log 3 thread 1 sequence 7006
    ARC0: Beginning to archive log 3 thread 1 sequence 7006
    Creating archive destination LOG_ARCHIVE_DEST_1: 'F:\ORACLE\ORADATA\PRDPS\ARCHIVE\PRDPS_001_07006.ARC'
    Fri Mar 06 00:26:15 2009
    ARC1: Completed archiving log 2 thread 1 sequence 7005
    ARC1: Evaluating archive log 3 thread 1 sequence 7006
    ARC1: Unable to archive log 3 thread 1 sequence 7006
    Log actively being archived by another process
    Fri Mar 06 00:26:16 2009
    Thread 1 cannot allocate new log, sequence 7008
    All online logs needed archiving
    Current log# 1 seq# 7007 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO01A.LOG
    Current log# 1 seq# 7007 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO01B.LOG
    Current log# 1 seq# 7007 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO01C.LOG
    Thread 1 advanced to log sequence 7008
    Current log# 2 seq# 7008 mem# 0: E:\ORACLE\ORADATA\PRDPS\REDO02A.LOG
    Current log# 2 seq# 7008 mem# 1: F:\ORACLE\ORADATA\PRDPS\REDO02B.LOG
    Current log# 2 seq# 7008 mem# 2: G:\ORACLE\ORADATA\PRDPS\REDO02C.LOG
    Fri Mar 06 00:26:16 2009
    ARC1: Evaluating archive log 3 thread 1 sequence 7006
    ARC1: Unable to archive log 3 thread 1 sequence 7006
    Log actively being archived by another process
    ARC1: Evaluating archive log 1 thread 1 sequence 7007
    ARC1: Beginning to archive log 1 thread 1 sequence 7007
    Should I just change:
    db_writer_processes     1
    dbwr_io_slaves          2
    Thank you
    Any help appreciated.

    This message indicates that Oracle wants to reuse a redo log file, but the corresponding checkpoint has not yet finished. In this case, Oracle must wait until the checkpoint completes. This situation is encountered particularly when the transactional activity is heavy.
    Check these statistics:
    - Background checkpoint started
    - Background checkpoint completed
    These two statistics must not differ by more than one. If they do, your database is hanging on checkpoints: LGWR is unable to continue writing the next transactions until the checkpoints complete.
    Three reasons may explain this difference:
    - a checkpoint frequency that is too high,
    - checkpoints that are starting but not completing,
    - a DBWR that writes too slowly.
    The way to resolve incomplete checkpoints is through tuning checkpoints and logs:
    1) Give the checkpoint process more time to cycle through the logs
    - add more redo log groups
    - increase the size of the redo logs
    2) Reduce the frequency of checkpoints
    - increase LOG_CHECKPOINT_INTERVAL
    - increase the size of the online redo logs
    3) Improve the efficiency of checkpoints by enabling the CKPT process with CHECKPOINT_PROCESS=TRUE
    4) Set LOG_CHECKPOINT_TIMEOUT = 0. This disables checkpointing based on a time interval.
    5) Another means of solving this error is for DBWR to write the dirty buffers to disk more quickly. The parameter linked to this task is DB_BLOCK_CHECKPOINT_BATCH; it specifies the number of blocks dedicated within the batch size to writing checkpoints. When you want to accelerate checkpoints, increase this value.
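    Since the alert log above reports "All online logs needed archiving" rather than "Checkpoint not complete", the archiver also needs headroom. A sketch of two common adjustments - the value 4, the new group number and the file names are placeholders that simply mirror the existing layout:
    ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;   -- allow more ARCn processes to run in parallel
    ALTER DATABASE ADD LOGFILE GROUP 4
      ('E:\ORACLE\ORADATA\PRDPS\REDO04A.LOG',
       'F:\ORACLE\ORADATA\PRDPS\REDO04B.LOG',
       'G:\ORACLE\ORADATA\PRDPS\REDO04C.LOG') SIZE 100M;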

  • Increased logfile size and now get cannot allocate new log

    Because we were archiving up to 6 or 7 times per minute, I increased our logfile size from 13M to 150M. I also increased the number of groups from 3 to 5.
    Because we want to ensure recoverability within a certain timeframe, I also have a script that runs every 15 minutes and issues 2 commands: ALTER SYSTEM SWITCH LOGFILE; and ALTER SYSTEM CHECKPOINT;
    I am now seeing in my alert.log file the following, almost every time we do a log switch.
    Thread 1 cannot allocate new log, sequence 12380
    Private strand flush not complete
    No other changes have been made to the database.
    Why would it now be doing this?
    Should I not be doing both the ALTER SYSTEM SWITCH LOGFILE and the ALTER SYSTEM CHECKPOINT?
    Is there something else I should be looking at?
    Any suggestions/answers would be greatly appreciated.
    Db version: 11.1.0.7
    OS: OEL 5.5

    Set the FAST_START_MTTR_TARGET parameter to the instance recovery time in seconds that you want.
    ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES to a higher value (up to 10); this will make sure that the redo logs are copied faster.
    Sizing the redo log files can influence performance, because DBWR, LGWR and ARCH are all working during high-DML periods.
    Too small an online redo log file size can cause slowdowns from excessive DBWR and checkpointing behavior. A high checkpointing frequency and "log file switch (checkpoint incomplete)" waits can also cause slowdowns.
    Add additional log writer processes (LGWR).
    Ensure that the archived redo log filesystem resides on a separate physical disk spindle.
    Put the archived redo log filesystem on super-fast solid-state disks.

Maybe you are looking for

  • How to upload a PDF file and convert it to OTF format

    We have come across rquirements like converting OTF to PDF but my requirement is to read a PDF file in SAP and convert it to OTF format for printing. Can anyone please help me with the Function Modules to do so.

  • Purchase order user exit / BADI / Enhancement points

    Hello  experts, Please Help me to get the user exit or any way to enhance the PO functionality. I am trying to set the PO line item status to "Blocked" or "Unblocked" whenever a change is triggered in PO line item for transactions Me21N / Me22N. The

  • Web forms w/ patch 1 tree & call_form crash

    Hi there: I've done my homework and haven't come across anyone else w/ the same problem, so let me see if I can start a grass roots movement here. I've got a tree item in a 6i patch 1 web form that acts as a forms launcher. If you double-click a part

  • Plastic of the hinges make noises

    My Macbook pro early 2011 15'' make a little noise when i open or close the dispalay. Is like a "plap" and it's at the level of the plastic of the hinges, and i think that is the plastic that make noises. Have you ever had this problem?If yes what di

  • Idoc mapping into E1IDB02 with different qualifiers

    Hi, I have to map a flat file to an Idoc (PEXR2002). Problem is in the flat file I have debitor and creditor informations wich have to be mapped into the same target segment named E1IDB02 with different qualifiers (BA for debitor and BB for creditor)