How do I manually archive 1 redo log at a time?

The database is configured in archive mode, but automatic archiving is turned off.
For both Oracle 9.0.1 and 9.2.0 on Windows, when I try to manually archive a single redo log, the database
archives as many logs as it can, up to the log just before the current log.
For example:
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 NO INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 NO INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 NO INACTIVE 425088 28-MAR-05
4 1 17 512000 1 NO INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
6 rows selected.
SQL> alter system archive log next;
System altered.
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 YES INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 YES INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 YES INACTIVE 425088 28-MAR-05
4 1 17 512000 1 YES INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
See - instead of only one log being archived, four of them were. Oracle behaves the same way if I use the "sequence" option:
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 NO INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 NO INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 NO INACTIVE 425088 28-MAR-05
4 1 17 512000 1 NO INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
6 rows selected.
SQL> alter system archive log sequence 14;
System altered.
SQL> select * from v$log order by sequence#;
GROUP# THREAD# SEQUENCE# BYTES MEMBERS ARC STATUS FIRST_CHANGE# FIRST_TIM
1 1 14 104857600 1 YES INACTIVE 424246 19-JAN-05
2 1 15 104857600 1 YES INACTIVE 425087 28-MAR-05
3 1 16 104857600 1 YES INACTIVE 425088 28-MAR-05
4 1 17 512000 1 YES INACTIVE 425092 28-MAR-05
5 1 18 512000 1 NO INACTIVE 425100 28-MAR-05
6 1 19 512000 1 NO CURRENT 425102 28-MAR-05
Is there some default system configuration property telling Oracle to archive as many logs as it can?
Thanks,
DGR

Thanks Yoann (and Syed Jaffar Jaffar Hussain too),
but I don't have a problem finding the group to archive or executing the alter system archive log command.
My problem is that Oracle doesn't behave as I expect it to.
This comes from the Oracle 9.2 online doc:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/statements_23a.htm#2053642
"Specify SEQUENCE to manually archive the online redo log file group identified by the log sequence number integer in the specified thread."
This implies that Oracle will only archive the log group identified by the log sequence number I specify in the alter system archive log sequence statement. However, Oracle is archiving almost all of the log groups (see my first post for an example).
This appears to be a bug, unless there is some other system parameter that is configured (by default) to allow Oracle to archive as many log groups as possible.
As to the reason why: it is an application requirement. The Oracle database must be in archive mode, automatic archiving must be disabled, and the application must control online redo log archiving.
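For reference, here is roughly what the application issues (a sketch only - the sequence and group numbers are just the ones from my example above):

SQL> select group#, sequence#, archived, status from v$log
     where archived = 'NO' and status = 'INACTIVE'
     order by sequence#;

SQL> alter system archive log sequence 14;   -- archive by sequence number
SQL> alter system archive log group 1;       -- or archive by group number

With the SEQUENCE form (and with NEXT) the result is what my first post shows: every unarchived log up to the one before the current log gets archived, not just the one I named.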
DGR

Similar Messages

  • How can I find the redo log and archived redo log for a specified date?

    We use Oracle 11gR2 on Windows 2008 R2.
    1. How can I find the redo log and archived redo log for a specified date (2013/10/17, etc.)?
    2. What is the format of an archived redo log? (A zipped file?)

    user12075536123 wrote:
    1)
    select * from v$archived_log;
    select * from v$log_history;
    but there is a possibility that there is no old data there.
    Also note that v$log_history contains no filename column:
    SQL> desc v$log_history
    Name                                      Null?    Type
    RECID                                              NUMBER
    STAMP                                              NUMBER
    THREAD#                                            NUMBER
    SEQUENCE#                                          NUMBER
    FIRST_CHANGE#                                      NUMBER
    FIRST_TIME                                         DATE
    NEXT_CHANGE#                                       NUMBER
    RESETLOGS_CHANGE#                                  NUMBER
    RESETLOGS_TIME                                     DATE
    there is NO data when archive mode is disabled
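    For question 1, a minimal sketch against v$archived_log (which, unlike v$log_history, does have a NAME column), assuming the history for that date is still kept in the control file:

    select name, thread#, sequence#, first_time, completion_time
    from v$archived_log
    where first_time >= to_date('2013/10/17', 'YYYY/MM/DD')
      and first_time <  to_date('2013/10/18', 'YYYY/MM/DD')
    order by sequence#;

    For question 2: an archived redo log is a binary copy of the redo, in Oracle's own redo format; it is not a zipped file.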

  • The file structure online redo log, archived redo log and standby redo log

    I have read some Oracle documentation about file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure or settings in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are 3 types of redo logs: online redo logs, archived redo logs and standby redo logs. The basic settings are:
    1. Online redo logs --- These must exist on the primary database and on a logical standby database, but they are not strictly needed while a physical standby is only applying redo, because the physical standby is not open and does not generate redo. However, if online redo logs are not set up on the physical standby, how can it function after a failover, when the standby becomes the primary? In my standby databases, online redo logs have been set up.
    2. Archived redo logs --- Obviously the primary database, the logical standby and the physical standby all need these. The primary uses them to archive its redo and ship it to the standby; the standby receives the data into archived logs and applies it to the database.
    3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that it is used to store redo data received from another database, and that a standby redo log is required if you want to implement the maximum protection and maximum availability levels of data protection, real-time apply, or cascaded destinations. So it seems that standby redo logs should only be set up on the standby database, not on the primary. Is my understanding correct? When I reviewed the current redo log settings in my environment, I found that standby redo log directories and files have been set up on both the primary and the standby databases. I would like to get more information and education from the experts. What is the best setting or structure on the primary and standby databases?

    FZheng:
    Thanks for your input. It is clear that we need all 3 types of redo logs on both databases. You answered my question.
    But I have another one. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases in the configuration, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It also says that at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
    My current Data Guard environment settings are: on the primary DB, the online redo log group size is 512M and the standby redo log group size is 500M; on the standby DB, the online redo log group size is 500M and the standby redo log group size is 750M.
    This was set up by someone I don't know. Is this setting OK, or should I change the standby redo logs on the standby DB to 512M to exactly match the redo log size on the primary?
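    If the standby redo logs do need to be resized to match, I assume the steps on the standby would be roughly the following, with redo apply stopped first (a sketch only; group numbers and file paths are illustrative):

    alter database recover managed standby database cancel;
    alter database drop standby logfile group 11;
    alter database add standby logfile group 11 ('/u01/oradata/STBY/srl11.log') size 512m;
    -- repeat for each remaining standby redo log group, then restart apply
    alter database recover managed standby database disconnect;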
    Edited by: 853153 on Jun 22, 2011 9:42 AM

  • Delete old and unused Archived Redo Log Files

    Hello forum!
    My db was in ARCHIVELOG mode and it created 9GB of archived redo log files.
    Now I have put the db in NOARCHIVELOG mode. Can I delete all the archived redo log files? Can I be sure that the DB will never need those files in the future?
    If yes, how can I delete them? Should I just use the 'del' operating system command?
    In addition, I found this command:
    SQL>alter database open resetlogs;
    is it useful for my purpose?
    thank you!

    You are safe to remove those archivelog files if you have altered your database to NOARCHIVELOG mode. Just remove them at the OS level.
    Please bear in mind that a database in NOARCHIVELOG mode will lose data in the event of a disaster.
    SQL>alter database open resetlogs;
    doesn't help in your situation; it's used to bring up the database after an incomplete recovery.
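    If the database is registered with RMAN, a slightly cleaner alternative to deleting the files with 'del' is to let RMAN remove them and keep its repository in sync (a sketch; run from an RMAN session connected to the target database):

    RMAN> crosscheck archivelog all;
    RMAN> delete noprompt archivelog all;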

  • Select from .. as of - using archived redo logs - 10g

    Hi,
    I was under the impression that I could issue a "select ... as of" statement back in time if I have the archived redo logs.
    I've been searching for a while and can't find an answer.
    My undo_management=AUTO, the database is 10.2.0.1, and the retention is the default of 900 seconds as I've never changed it.
    I want to query a table as of 24 hours ago, so I have all the archived redo logs from the last 48 hours in the correct directory.
    When I issue the following query:
    select * from supplier_codes AS OF TIMESTAMP
    TO_TIMESTAMP('2009-08-11 10:01:00', 'YYYY-MM-DD HH24:MI:SS')
    I get a "snapshot too old" ORA-01555 error. I guess that is because my retention is only 900 seconds, but I thought the database should then query the archived redo logs - or have I got that totally wrong?!
    My undo tablespace is set to AUTOEXTEND ON and MAXSIZE UNLIMITED, so there should be no space issues.
    Any help would be greatly appreciated!
    Thanks
    Robert

    If you want to go back 24 hours, you need the undo for those changes - flashback query reads undo, not the archived redo logs...
    See e.g. the app dev guide - fundamentals, chapter on Flashback features: [doc search|http://www.oracle.com/pls/db102/ranked?word=flashback&remark=federated_search].
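    A minimal sketch of making the undo cover 24 hours, assuming the undo tablespace has room to keep that much undo (the tablespace name is just an example):

    SQL> alter system set undo_retention = 86400;        -- 24 hours, in seconds
    SQL> alter tablespace undotbs1 retention guarantee;  -- optional: refuse to overwrite unexpired undo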

  • Hoping for a quick response : EXP and Archived REDO log files

    I apologize in advance if this question has been asked and answered 100 times. I admit I didn't search, I don't have time. I'm leaving on vacation tomorrow, and I need to know if I'm correct about something to do with backup / restore.
    we have 10g R2 running a single instance on a single server. The application vendor has "embedded" oracle with their application. The vendor's backup is a batch file using EXP - thus:
    exp system/xpwdxx@db full=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt direct=y compress=y
    This command is executed nightly at midnight. The files are then backed up by our nightly backup to offsite storage media.
    The database is running in autoarchive mode. The problem is, the archived redo files filled the drive they were being stored on, and it is the drive the database is on. I used OS commands to move 136G of archived redo logs onto other storage media to free the drive.
    My question: since the EXP runs at midnight, when there is likely NO activity, do I need to run in autoarchive mode? From what I have read, you cannot even apply archived redo log files to this type of backup strategy (IMP). Is that true? We are OK with losing changes since our last EXP. I have read a lot of stuff about restoring consistent vs. inconsistent, and just need to know: if my disk fails, and I have to start with a clean install of Oracle and nothing else, can I IMP this EXP and get back up and running as of the last EXP? Or do I need the autoarchived redo log files back to July 2009 (136G of them)?
    Hoping for a quick response
    Best Regards, and thanks in advance
    Bruce Davis

    Bruce Davis wrote:
    Amardeep Sidhu
    Thank you for your quick reply. I am reading in the other responses that since I am using EXP without consistent=y, I might not even have a backup. The application vendor said that with this dmp file they can restore us to the most recent backup. I don't really care for this strategy as it is untested. I asked them to verify that they could restore us and they said they tested the dmp file and it was OK.
    Thank you for taking the time to reply.
    Best Regards
    Bruce
    The dump file is probably OK in the sense that it is not corrupted and can be used in an imp operation. That doesn't mean the data in it is transactionally consistent. And to use it at all, you have to have a database up and running. If the database is physically corrupted, you'll have to rebuild a new database from scratch before you can even think about using your dmp file.
    Vendors never understand databases. I once had a vendor tell me that Oracle's performance would be intolerable if there were more than 5 concurrent connections. Well, maybe in HIS product ..... Discussions terminated quickly after he made that statement.
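    If the nightly export has to stay for now, one small improvement is to ask for a transactionally consistent dump; a sketch based on the vendor's original command (same paths and credentials as quoted above):

    exp system/xpwdxx@db full=y consistent=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt

    It still is not a substitute for a physical (RMAN) backup, which is what would be needed to make any use of the archived redo logs.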

  • Where exactly does RFS write redo data? (archived redo log or standby redo log)

    Good morning to all;
    I am getting a bit confused by the official Oracle documentation. REF_LINK: Log Apply Services
    Redo data transmitted from the primary database is received by the RFS on the standby system,
    where the RFS process writes the redo data to either archived redo log files or standby redo log files.
    At the standby site, does RFS write redo data to just one of these file types, or to both?
    Thanks in advance ..

    Hi GTS,
    GTS (DBA) wrote:
    Primary & standby log file size should be same - this is okay.
    1) What are you trying to say about largest and smallest here? You are confusing me.
    Read: http://docs.oracle.com/cd/E11882_01/server.112/e25608/log_transport.htm#SBYDB4752
    "Each standby redo log file must be at least as large as the largest redo log file in the redo log of the redo source database. For administrative ease, Oracle recommends that all redo log files in the redo log at the redo source database and the standby redo log at a redo transport destination be of the same size."
    GTS (DBA) wrote:
    2) What about group members? Should it be the same as the primary, or do we need to add some additional members?
    The Data Guard best practice for performance is to create one member per group on the standby DB. On the standby, one member per group is reasonable enough. Why? To avoid the write penalty of writing to more than one log file at the standby DB.
    SCENARIO 1: if your primary DB has 2 log members per group, the standby DB can have 1 member per group, plus one additional group:

                              Primary   Standby
        Members per group     2         1
        Number of log groups  4         5

    SCENARIO 2: you can also have this scenario, but I would not encourage it:

                              Primary   Standby
        Members per group     2         2
        Number of log groups  4         5
    GTS (DBA) wrote:
    All standby redo logs of the correct size have not yet been archived.
      - In this situation, can we force archiving on the standby site? Are there any possibilities?
    You cannot force it; just size your standby redo log files correctly and make sure you do not have network failures that will cause a redo gap.
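    To see which file type RFS is actually writing into on a given standby, something like this can be run on the standby (a sketch using the standard views):

    SQL> select process, status, thread#, sequence#, block#
         from v$managed_standby
         where process = 'RFS';

    SQL> select group#, thread#, sequence#, status
         from v$standby_log;

    If correctly sized standby redo logs exist, RFS writes into them (the active groups show up in v$standby_log); otherwise it falls back to writing archived redo log files directly.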
    hope there is clarity now
    Tobi

  • Recover Database is taking more time for first archived redo log file

    Hai,
    Environment Used :
    Hardware : IBM p570 machine with P6 processor Lpar of .5 CPU and 2.5 GB Ram
    OS : AIX 5.3 ML 07
    Cluster: HACMP 5.4.1.2
    Oracle Version: 9.2.0.4 RAC
    SAN : DS8100 from IBM
    I have used the flash copy option to copy the database from production to the test machine, then tried to recover the database to a consistent state using the command "recover automatic database until cancel". The system takes a long time, and the alert log shows that, for the first archived redo log only, it reads all the datafiles, taking about 3 seconds per datafile. Since I have more than 500 datafiles, it takes nearly 25 minutes to apply the first archived redo log file. All other log files are applied immediately without any delay. Any suggestion to improve the speed will be highly appreciated.
    Regards
    Sridhar

    After changing the LPAR settings to 2 CPUs and 5 GB RAM, the problem was solved.

  • How a select statement generates redo logs

    Can someone explain how a select statement generates redo?
    Naveen

    Redo from a select statement happens when dirty blocks get written to disk and are then "cleaned out" the next time those blocks are read (delayed block cleanout). This can happen when a large DML statement does not commit before DBWR needs to write the modified blocks to disk. The next time the blocks are read by a select statement, they get modified by the cleanout, hence redo is generated by the select statement.
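    A simple way to see this for yourself (a sketch; run it in the session that does the select, against a table that another session has just bulk-updated and committed without re-reading the blocks):

    SQL> select n.name, s.value
         from v$mystat s, v$statname n
         where s.statistic# = n.statistic#
         and n.name = 'redo size';

    -- run the large SELECT against the recently modified table, then repeat
    -- the query above: the increase in 'redo size' is the redo written by the
    -- block cleanouts the SELECT performed.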
    Read the following for a more detailed explanation:
    http://www.dbspecialists.com/specialists/specialist2003-10.htm
    Jaffar

  • ASM Migrate Archive Redo Logs

    Hi all,
    Is there any way to migrate existing archived redo logs to the ASM diskgroup?
    Regards,
    Vijayaraghavan K

    Thanks..
    I have copied the spfile from the last backup to the ASM diskgroup using the command below:
    RESTORE SPFILE TO '+DATA/spfilesid.ora';
    When I try to start the database, it still picks up the spfile from the file system. Is it possible to remap the spfile to the diskgroup?
    Regards,
    Vijayarghavan K
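    A common way to handle both points (a sketch; the spfile path is just the one used in the RESTORE above, and the diskgroup name is illustrative): leave a one-line stub parameter file in the default location ($ORACLE_HOME/dbs, or %ORACLE_HOME%\database on Windows) that points the instance at the spfile inside ASM, send future archived logs to the diskgroup, and copy the existing ones in with RMAN.

    # initSID.ora - stub pfile pointing at the spfile in ASM
    SPFILE='+DATA/spfilesid.ora'

    SQL> alter system set log_archive_dest_1='LOCATION=+DATA' scope=spfile;

    RMAN> backup as copy archivelog all format '+DATA';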

  • High redo log space wait time

    Hello,
    Our DB has a very high redo log space wait time:
    redo log space requests 867527
    redo log space wait time 67752674
    LOG_BUFFER is 14 MB, and we have 6 redo log groups with a redo log file size of 500MB each.
    Also, the amount of redo generated per hour :
    START_DATE   START  NUM_LOGS  MBYTES  DBNAME
    2008-07-03   10:00         2    1000  TKL
    2008-07-03   11:00         4    2000  TKL
    2008-07-03   12:00         3    1500  TKL
    Will increasing the size of LOG_BUFFER help to reduce the redo log space waits?
    Thanks in advance ,
    Regards,
    Aman

    Looking quickly over the AWR report provided, the following information could be helpful:
    1. You are currently targeting approx. 6GB of memory with this single instance, and the report tells us that physical memory is 8GB. According to the advisories it looks like you could decrease your memory allocation without hurting performance.
    In particular the large_pool_size setting seems to be quite high although you're using shared servers.
    Since you're using 10.2.0.4 it might be worth thinking about using the single SGA_TARGET parameter instead of specifying all the individual parameters. This allows Oracle to size the SGA components within the given target dynamically.
    2. You are currently using a couple of underscore parameters. In particular the "_optimizer_max_permutations" parameter is set to 200, which might significantly reduce the number of execution plan permutations Oracle investigates while optimizing a statement and could lead to suboptimal plans. It could be worth checking why this has been set.
    In addition you are using a non-default setting of "_shared_pool_reserved_pct" which might no longer be necessary if you are using the SGA_TARGET parameter as mentioned above.
    3. You are using non-default settings for the "optimizer_index_caching" and "optimizer_index_cost_adj" parameters which favor index-access paths / nested loops. Since "db file sequential read" is the top wait event, it might be worth checking whether the database is doing excessive index access. Also, most of the rows have been fetched by rowid (table fetch by rowid), which could also be an indicator of excessive index access / nested loop usage.
    4. Your database has been working quite a lot during the 30-minute snapshot interval: it processed 123,000,000 logical blocks, which means almost 0.5GB per second. Check the top SQLs; there are a few that are responsible for most of the blocks processed. E.g. there is an anonymous PL/SQL block that has been executed almost 17,000 times during the interval, representing 75% of the blocks processed. The statements executed as part of these procedures might be worth checking to see if they could be tuned to require fewer logical I/Os. This could be related to the non-default optimizer parameters mentioned above.
    5. You are still using the compatible = 9.2.0 setting, which means this database could still be opened by a 9i instance. If this is no longer required, you might lift this to the default value of 10g. This will also convert the redo format to 10g, I think, which could lead to a smaller amount of redo being generated. But be aware that this is a one-way operation: once compatible has been set to 10.x you can only go back to 9i via a restore.
    6. Your undo retention is set quite high (> 6000 secs), although your longest query in the AWR period was 151 seconds. It might be worth checking whether this setting is reasonable, as you might have quite a large undo tablespace at present. Oracle 10g ignores the setting if it isn't able to honor it given the current undo tablespace size.
    7. "parallel_max_servers" has been set to 0, so no parallel operations can take place. This might be intentional but it's something to keep in mind.
    Regards,
    Randolf
    Oracle related stuff:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How to recover from corrupt redo log file in non-archived 10g db

    Hello Friends,
    I don't know much about recovering databases. I have a 10.2.0.2 database with a corrupt redo file and I am getting the following error on startup. (The db is non-archived and there is no backup.) Thanks very much for any help.
    Database mounted.
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 6464 change 9979452011066 time 06/27/2009
    15:46:47
    ORA-00312: online log 1 thread 1: '/dbfiles/data_files/log3.dbf'
    ====
    SQL> select Group#,members,status from v$log;
    GROUP# MEMBERS STATUS
    1 1 CURRENT
    3 1 UNUSED
    2 1 INACTIVE
    ==
    I have tried the following commands so far, but no luck:
    SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
    Database altered.
    SQL> alter database open resetlogs;
    alter database open resetlogs
    ERROR at line 1:
    ORA-01139: RESETLOGS option only valid after an incomplete database recovery
    SQL> alter database open;
    alter database open
    ERROR at line 1:
    ORA-00368: checksum error in redo log block
    ORA-00353: log corruption near block 6464 change 9979452011066 time 06/27/2009
    15:46:47
    ORA-00312: online log 1 thread 1: '/dbfiles/data_files/log3.dbf'

    user652965 wrote:
    Thanks very much for your help guys. I appreciate it. Unfortunately none of these commands worked for me. I kept getting an error when clearing the logs saying that the redo log is needed to perform recovery, so it can't be cleared. So I ended up restoring from an earlier snapshot of my db volume. The database is now open.
    Thanks again for your input.
    And now, as a follow-up: at a minimum you should make sure that all redo log groups have at least 3 members. Then, if you lose a single redo log file, all you have to do is shut down the db and copy one of the good members (of the same group as the lost member) over the lost member.
    And as an additional follow-up, if you value your data you will run in archivelog mode and take regular backups of the database and archivelogs. If you fail to do this you are saying that your data is not worth saving.
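    A sketch of that multiplexing step (file names and locations are illustrative only; ideally each member of a group sits on a different disk):

    SQL> alter database add logfile member '/u02/oradata/db/redo01b.log' to group 1;
    SQL> alter database add logfile member '/u02/oradata/db/redo02b.log' to group 2;
    SQL> alter database add logfile member '/u02/oradata/db/redo03b.log' to group 3;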

  • Is it possible to use Archive Redo log file

    Hi friends,
    My database is running in archive log mode. I took a cold backup on Sunday, but I take an archived log file backup every evening.
    On Wednesday my database crashed, which means I lost all the control files, redo log files, datafiles, etc.
    I have archived log file backups up to Tuesday night, and the other files (control files, datafiles, etc.) from Sunday.
    1) Is it possible to recover the database up to Tuesday? If yes, HOW do I use the archived log files?
    (The SCN of the control file and the datafiles is the same, so if we use the RECOVER DATABASE command Oracle says that media recovery is not required.)
    We don't have the current control file; we lost it in the media crash.

    Dear friend,
    In this scenario you have lost the control file.
    1) If you have an old copy of the control file which reflects the current structure of the database, and all the archive files, then you can recover the database with point-in-time recovery (using a backup controlfile).
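    A rough sketch of that recovery, assuming the Sunday datafiles and control file have been restored into place and the archived logs up to Tuesday are available:

    SQL> startup mount
    SQL> recover database using backup controlfile until cancel;
    -- supply the archived log files as prompted; type CANCEL after the last
    -- available Tuesday log has been applied
    SQL> alter database open resetlogs;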
    suresh

  • How to disable write to redo log file in oracle7.3.4

    In Oracle 8, you can mark a table NOLOGGING so that certain operations are not logged in the redo log, like: alter table tablename nologging;
    How can I do this in Oracle 7.3.4?
    thanks.

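    As far as I recall, Oracle 7.3 has no table-level NOLOGGING attribute; the closest equivalent is the UNRECOVERABLE keyword on individual direct-load operations such as CREATE TABLE ... AS SELECT and CREATE INDEX (a sketch; the table and index names are illustrative):

    SQL> create table big_copy unrecoverable as select * from big_table;
    SQL> create index big_copy_idx on big_copy (id) unrecoverable;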

  • Archived redo log size much less than online redo logs

    Hi,
    My database size is around 27 GB and my redo logs are 50M each, but the archived log sizes are 11M or 13M, and the logs are switching every 5-10 minutes. Why?
    Regards
    Azer Imamaliyev

    Azer_OCA11g wrote:
    1) Almost all archived log sizes are 11M or 13M, but sometimes 30M or 37M.
    2)
    select to_char(completion_time, 'dd.mm.yyyy HH24:MI:SS')
    from v$archived_log
    order by completion_time desc;
    10.02.2012 11:00:26
    10.02.2012 10:50:23
    10.02.2012 10:40:05
    10.02.2012 10:29:34
    10.02.2012 10:28:26
    10.02.2012 10:18:07
    10.02.2012 10:05:04
    10.02.2012 09:55:03
    10.02.2012 09:40:54
    10.02.2012 09:28:06
    10.02.2012 09:13:44
    10.02.2012 09:00:17
    10.02.2012 08:45:04
    10.02.2012 08:25:04
    10.02.2012 08:07:12
    10.02.2012 07:50:06
    10.02.2012 07:25:05
    10.02.2012 07:04:50
    10.02.2012 06:45:04
    10.02.2012 06:20:04
    10.02.2012 06:00:12
    3) There aren't any serious changes at the DB level; these messages have shown up in the alert log almost since the DB was created.
    Two simple thoughts:
    1) Are you running with archive log compression - add the "compressed" column to the query above to see if the archived log files are compressed
    2) The difference may simply be a reflection of the number and sizes of the public and private redo threads you have enabled - when anticipating a log file switch Oracle leaves enough space to cater for threads that need to be flushed into the log file, and then doesn't necessarily have to use it.
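    For point 1, that check might look roughly like this (a sketch; COMPRESSED is a column of v$archived_log):

    SQL> select sequence#, blocks, block_size, compressed
         from v$archived_log
         order by sequence#;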
    Here's a query (if you can run as SYS) to show you your allocation of public and private threads
    select
         PTR_KCRF_PVT_STRAND           ,
         FIRST_BUF_KCRFA               ,
         LAST_BUF_KCRFA                ,
         TOTAL_BUFS_KCRFA              ,
         STRAND_SIZE_KCRFA             ,
         indx
    from
         x$kcrfstrand
    ;
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: Oracle Core
