Logical Standby table not supported 11.2.0.1
I have a table with a CLOB column in the primary database, and I ran the SQL below to determine which tables are unsupported.
Neither the application team nor I can see why this table is unsupported. Please let me know what I am missing.
Are there additional steps to see why?
Please also see the DDL for the table at the end.
SQL> SELECT COLUMN_NAME,DATA_TYPE FROM DBA_LOGSTDBY_UNSUPPORTED WHERE OWNER='P_RPMX_AUDIT_DB' AND TABLE_NAME = 'AUDIT_TAB' ;
COLUMN_NAME DATA_TYPE
TABLE_SCRIPT CLOB
SQL> SELECT OWNER FROM DBA_LOGSTDBY_SKIP WHERE STATEMENT_OPT = 'INTERNAL SCHEMA';
OWNER
DBSNMP
SYS
SYSTEM
WMSYS
ORDDATA
OUTLN
DIP
EXFSYS
XDB
ORDPLUGINS
ANONYMOUS
APPQOSSYS
ORDSYS
SI_INFORMTN_SCHEMA
XS$NULL
15 rows selected.
SQL> SELECT DISTINCT OWNER,TABLE_NAME FROM DBA_LOGSTDBY_UNSUPPORTED ORDER BY OWNER,TABLE_NAME;
OWNER TABLE_NAME
APG APG_REJECT_TAB
APG ERR$_ACCOUNT
APG ERR$_ACCOUNT_ADDRESS
APG ERR$_ACCOUNT_ATTRIBUTES
APG ERR$_ACCOUNT_CYCLE_HISTORY
APG ERR$_ACCOUNT_USERS
APG ERR$_ACCOUNT_XREFERENCE
APG ERR$_ACTIONABLE_ITEMS
APG ERR$_IDENTIFICATION
APG ERR$_LOAD_RECON_METRICS
APG ERR$_PHONE
APG ERR$_POOL
APG ERR$_POOL_ACCOUNTS
APG ERR$_POOL_ACCOUNT_ENROLL_HISTO
APG ERR$_USERS
P_IM_EXTRACT_MASK ERR$_REWARDS_ACTIVITY
P_IM_EXTRACT_MASK ERR$_REWARDS_TRANSACTION
P_IM_EXTRACT_MASK ERR$_REWARDS_TRANSACTION_ITEM
P_IM_EXTRACT_MASK ERR$_TRANSACTION
P_RPMX_AUDIT_DB AUDIT_TAB
P_RPMX_JOBS ERR$_ACCOUNT
P_RPMX_JOBS ERR$_ACCOUNT_XREFERENCE
22 rows selected.
CREATE TABLE P_RPMX_AUDIT_DB.AUDIT_TAB
(
  TABLE_SCRIPT CLOB
)
LOB (TABLE_SCRIPT) STORE AS (
  TABLESPACE P_RPMX_AUD_DB_DATA
  ENABLE STORAGE IN ROW
  CHUNK 8192
  RETENTION
  NOCACHE
  LOGGING
  STORAGE (
    INITIAL 1M
    NEXT 1M
    MINEXTENTS 1
    MAXEXTENTS UNLIMITED
    PCTINCREASE 0
    BUFFER_POOL DEFAULT
    FLASH_CACHE DEFAULT
    CELL_FLASH_CACHE DEFAULT
  )
)
TABLESPACE P_RPMX_AUD_DB_DATA
RESULT_CACHE (MODE DEFAULT)
PCTUSED 0
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
  INITIAL 1M
  NEXT 1M
  MINEXTENTS 1
  MAXEXTENTS UNLIMITED
  PCTINCREASE 0
  BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT
  CELL_FLASH_CACHE DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
Thanks, Chris
Hi, I was able to find the following answer. Thanks.
"Logical standby has never supported tables that only contain LOBs. We require some scalar column that can be used for row identification during update and delete processing.
There is some discussion of row identification issues in section 4.1.2 of Oracle Data Guard Concepts and Administration. "
http://docs.oracle.com/cd/E11882_01/server.112/e25608/create_ls.htm#i77026
"If there is no primary key and no nonnull unique constraint/index, then all columns of bounded size are logged as part of the UPDATE statement to identify the modified row. All columns are logged except the following: LONG, LOB, LONG RAW, object type, and collections."
So you see, with only a LOB column the apply process cannot identify the row. You need a primary key, or at least some scalar columns that can uniquely identify the row; the lone LOB column is not logged via supplemental logging.
So it is not supported.
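As a sketch of the fix (assuming you are free to alter the table; the AUDIT_ID column, the sequence name, and the constraint name below are made up for illustration and are not in the original DDL), adding a scalar primary-key column gives SQL Apply something to identify rows by:

```sql
-- Hypothetical fix: add a scalar key so SQL Apply can identify rows.
-- Column, sequence, and constraint names are illustrative only.
ALTER TABLE P_RPMX_AUDIT_DB.AUDIT_TAB ADD (AUDIT_ID NUMBER);

CREATE SEQUENCE P_RPMX_AUDIT_DB.AUDIT_TAB_SEQ;

UPDATE P_RPMX_AUDIT_DB.AUDIT_TAB
   SET AUDIT_ID = P_RPMX_AUDIT_DB.AUDIT_TAB_SEQ.NEXTVAL;

ALTER TABLE P_RPMX_AUDIT_DB.AUDIT_TAB
  ADD CONSTRAINT AUDIT_TAB_PK PRIMARY KEY (AUDIT_ID);

-- Afterwards the table should no longer show up here:
SELECT COLUMN_NAME, DATA_TYPE
  FROM DBA_LOGSTDBY_UNSUPPORTED
 WHERE OWNER = 'P_RPMX_AUDIT_DB' AND TABLE_NAME = 'AUDIT_TAB';
```

CLOB itself is a supported datatype for SQL Apply; it is the absence of any scalar identifying column that makes the table unsupported, so the added key should be enough.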
Similar Messages
-
Mat. for Initial Download: Table not supported by function "C_"
Hi All,
We are in SRM 5.0 & ECC 6.0 with Classic scenario. While uploading the MATERIAL through R3AS, we have the "Mat. for Initial Download: Table not supported by function C_" & "No product ID determined for material number ... of logical system" error in SMW01.
In SMQ2 in SRM, Queue is blocked with "Error in Mapping (Details: transaction SMW01)"
For your information RFC Users at both the ends have SAP_ALL authorisation. SAP Notes 720819, 420980 & 432339 are implemented.
Please guide me to resolve this issue.
Regards
Ashutosh

Hi,
How was the issue resolved? I am having the same problem.
Can somebody provide a clue?
Regards -
Circular logical schemas are not supported
Hi,
I have created a repository in the OBIEE using the data model diagram supplied to me. I have dragged the entire physical layer model to form my BMM Layer Model.
I am getting the below error when I create my RPD. "Multiple paths exist to table A. Circular logical schemas are not supported."
Please note that there are no circular connections and I have double checked this.
When I try to delete the BMM Layer and keep only the physical layer,I do not get any consistency errors.
Someone please help.
Thanks,
Akshatha

Hi,
According to the docs-
Cause. The repository contains a circular logical scheme, which is not supported.
Response. Correct the repository so it does not contain multiple paths to the named table, and retry your actions.
Model the BMM layer as a proper star schema (dimensions and facts) with complex joins.
Check that the BMM follows a star schema with proper fact tables and dimension tables, and 1:n joins.
Hope this helped/answered.
Regards
MuRam -
Mat. for Initial Download: Table not supported by function
Dear All,
While replicating material from R3AS, I am getting error in SMW01 as below:
"Mat. for Initial Download: Table not supported by function"
Kindly suggest.
Regards,
Sagar

Hi,
If anybody is interested: I solved the problem for our systems, but it's not a very nice solution.
In the backend system, which is ECC 6.0 for us, the field CRM_REL in table CRMRFCPAR was empty for the connection to CRM. I think this field should be filled automatically, but it wasn't. I filled it 'hard' with '700', and then product/material data could be replicated.
Background:
In the ECC system, while collecting material data, the subroutine GET_DOWNLOAD_PARAMS_AND_QUEUE of function group CRM0 is called, and it contains the check 'IF gt_rfcdest-crm_rel GE '500'.'
This field crm_rel is filled from table CRMRFCPAR.
I think CRMRFCPAR-CRM_REL should be filled automatically when maintaining the RFC destination via SM30, but in our system it was empty (CRMPAROLTP -> CRM_RELEASE is set!). So I set CRMRFCPAR-CRM_REL 'hard' in SE11 to '700'.
This is not a very nice solution but maybe it will help anyone.
I'm also interested if anybody has another better solution.
Regards
Christoph -
Hi,
I'm not sure if anyone is using the logical standby database, because it appears to me to be complete rubbish. The apply of redo data just does not appear to work. For example, I have this table:
create table test29(
BP_ID NUMBER(10) NOT NULL,
BP_K_ID NUMBER(10) NOT NULL,
BP_TYP CHAR(2) NOT NULL,
BP_U_ID NUMBER(10),
BP_ACTIVE DATE NOT NULL,
BP_USED NUMBER(6) NOT NULL,
BP_STATUS CHAR(1) NOT NULL,
BP_LASTUPDATE DATE NOT NULL
);
and insert this data:
insert into test29
select 10643,291904,'O ',1,TO_DATE('01/05/2003 00:44:18', 'MM/DD/YYYY HH24:MI:SS'),0,'A',TO_DATE('01/06/2004 20:13:08', 'MM/DD/YYYY HH24:MI:SS') from dual;
When this information is transferred to the logical standby and it attempts to apply it, I get an ORA-26689: column datatype mismatch in LCR. But wait! If I change the name of column 6 (BP_PUNKTE) to, say, BP_AAAAA and try again, it works!!! The logical standby fails on many tables just because of a column name (updates, inserts, anything)... did anyone at Oracle test this? Is anyone using logical standby, and does anyone have ideas how (and if) I could ever get this working?
Using Oracle 9.2.0.4. on Redhat 2.1.
Thanks!
Steve.

Unfortunately not. The Oracle licence was purchased before I arrived, and when I attempted to get a support contract they wanted the money, since we purchased the licence 3 years ago. 3 years of cash for free! Why should I pay for their bugs anyway? When we buy this expensive software, it should work. When I buy a new car from Ford and the radio does not work, I don't have to pay Ford to fix it. Rip-off!
-
Logical Standby Database Not Getting Sync With Primary Database
Hi All,
I am using a Primary DB and Logical Standby DB configuration in Oracle 10g:-
Version Name:-
SQL> select * from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
PL/SQL Release 10.2.0.5.0 - Production
CORE 10.2.0.5.0 Production
TNS for Solaris: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production
We built the logical standby last week, and to date the logical DB is not in sync. I have checked the init parameters and I don't see any problems with them. The archive log destinations are also fine.
We have an important table named "HPD_HELPDESK" whose record count is growing steadily on the primary, whereas in the logical standby it is not growing. There is a difference of some 19K records between the two tables.
I have checked the alert log, but it is not showing any error message either. Please find the last few lines of the alert log of the logical database:
RFS LogMiner: Registered logfile [oradata_san1/oradata/remedy/arch/ars1_1703_790996778.arc] to LogMiner session id [1]
Tue Aug 28 14:56:52 GMT 2012
RFS[2853]: Successfully opened standby log 5: '/oracle_data/oradata/remedy/stbyredo01.log'
Tue Aug 28 14:56:58 GMT 2012
RFS LogMiner: Client enabled and ready for notification
Tue Aug 28 14:57:00 GMT 2012
RFS LogMiner: Registered logfile [oradata_san1/oradata/remedy/arch/ars1_1704_790996778.arc] to LogMiner session id [1]
Tue Aug 28 15:06:40 GMT 2012
RFS[2854]: Successfully opened standby log 5: '/oracle_data/oradata/remedy/stbyredo01.log'
Tue Aug 28 15:06:47 GMT 2012
RFS LogMiner: Client enabled and ready for notification
Tue Aug 28 15:06:49 GMT 2012
RFS LogMiner: Registered logfile [oradata_san1/oradata/remedy/arch/ars1_1705_790996778.arc] to LogMiner session id [1]
I am not able to trace the issue that why the records are not growing in logical DB. Please provide your inputs.
Regards,
Arijit

How do you know that there's such a gap between the tables?
If your standby db is a physical standby, then it is not open and you can't query your table without cancelling the recovery of the managed standby database.
What does it say if you execute this sql?
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#, BLOCKS FROM V$MANAGED_STANDBY;
The ARCH processes should be connected and MRP waiting for a file.
If you query for the archive_gaps, do you get any hits?
select * from gv$archive_gap
If you're not working in a RAC environment, you need to query v$archive_gap instead!
Did you check whether the archives generated from the primary instance are transferred and present in the file system of your standby database?
I believe your standby is no longer in recovery mode or has an archive gap, which is why it doesn't catch up anymore.
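Since this is a logical standby rather than a physical one, the SQL Apply views are the more direct check; a sketch of queries to run on the standby (standard 10.2 views):

```sql
-- How far SQL Apply has progressed versus what has been received:
SELECT APPLIED_SCN, APPLIED_TIME, READ_SCN, NEWEST_SCN
  FROM DBA_LOGSTDBY_PROGRESS;

-- Which registered logs have actually been applied:
SELECT SEQUENCE#, FIRST_TIME, APPLIED
  FROM DBA_LOGSTDBY_LOG
 ORDER BY SEQUENCE#;
```

If APPLIED_SCN is far behind NEWEST_SCN while logs keep registering, apply is receiving redo but not applying it, which matches the symptom described.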
Hope it helps a little,
Regards,
Sebastian
PS: I'm working on 11g, so unfortunately I'm not quite sure whether these views exist in 10gR2. It's worth a try though!
Edited by: skahlert on 31.08.2012 13:46 -
10gR2 Logical Standby database not applying logs
No errors are appearing in the logs and I've started the apply process :ALTER DATABASE START LOGICAL STANDBY APPLY but when I query dba_logstdby_log, none of the logs for the last 4 days shows as applied and the first SCN is still listed as current. Any thoughts on where I should start looking?
The latest event in DBA_LOGSTDBY_EVENTS is the startup of the log mining and apply.
I do not have standby redo logs, so I cannot do real-time apply, though I am looking into implementing this. Obviously, this is pretty new to me.

Sorry I didn't mention this before: the logs are being transferred. I verified their location on the OS, and it matches the location in the dba_logstdby_log view.
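As a starting point for diagnosis, a sketch of the checks I would run on the standby (all standard logical-standby views; run after confirming apply was started):

```sql
-- Confirm the SQL Apply processes are actually up:
SELECT TYPE, STATUS_CODE, STATUS
  FROM V$LOGSTDBY_PROCESS;

-- Look for the most recent apply events/errors; a silent stall
-- often shows up here rather than in the alert log:
SELECT EVENT_TIME, STATUS, EVENT
  FROM DBA_LOGSTDBY_EVENTS
 ORDER BY EVENT_TIME DESC;
```

If V$LOGSTDBY_PROCESS returns no rows, apply is not running despite the START command; DBA_LOGSTDBY_EVENTS usually records why it stopped.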
-
Why certain tables not supported in DBAdapter?
While creating a Database Adapter PartnerLink and selecting a list of required tables, I get the following error:
"The following tables are not supported in the Database Adapter and were not imported {list of tables}"
What kind of tables are not supported? The tables that are so far not included are many-to-many link tables. Some of these tables have column values that I need. Must I use a custom SQL statement to acquire the information in them?

a. System acknowledgments used by the runtime environment to confirm that an asynchronous message has reached the receiver.
b. Application acknowledgments used to confirm that the asynchronous message has been successfully processed at the receiver.
http://help.sap.com/saphelp_nw2004s/helpdata/en/f4/8620c6b58c422c960c53f3ed71b432/content.htm
c. No
Message was edited by:
Prabhu S -
I just upgraded to OS x and now Logic 9 is not supported...
Logic 9 is not working with the new OS X (10.9). Is there an upgrade for logic or does everyone have to buy the new one?
Logic 9.1.8 will work OK under OS X 10.9.
Make sure your Logic App is in your Applications folder and you haven't re-named it in any way. Open your app store and 9.1.8 will show up as an update. Do the update and you'll be good to go. -
Table Modifications in Logical Standby
Can I add a new column to a table in a logical standby for a table which is actively synchronized from the primary? In essence, I would like to add a column to the table in the logical standby, but not add the column to the corresponding table in the primary DB.
OK;
So I have convert a Physical Standby to a Logical database.
This table has been created:
CREATE TABLE RSTARS
(
  ID NUMBER,
  FIRST_NAME VARCHAR2(50 BYTE),
  LAST_NAME VARCHAR2(50 BYTE)
)
TABLESPACE OTN_TEST
PCTUSED 40
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
  INITIAL 64K
  NEXT 1M
  MINEXTENTS 1
  MAXEXTENTS UNLIMITED
  PCTINCREASE 0
  FREELISTS 1
  FREELIST GROUPS 1
  BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
ALTER TABLE BIGSHOW.RSTARS ADD (
CONSTRAINT RSTARS_PK
PRIMARY KEY
(ID));
This data exist in the table:
set linesize 200
select * from rstars;
ID FIRST_NAME LAST_NAME
1 Robert Plant
2 Jimmy Page
2 rows selected.
On the Primary add another row of data
INSERT INTO RSTARS
(
  ID,
  FIRST_NAME,
  LAST_NAME
)
VALUES
(
  3,
  'Ted',
  'Nugent'
);
commit;
Connect to the logical Standby and check for it.
set linesize 200
select * from rstars;
ID FIRST_NAME LAST_NAME
1 Robert Plant
2 Jimmy Page
3 Ted Nugent
3 rows selected.
MORE TO COME
So I will add the columns one by one and test after each
Add varchar2 to the table:
SQL> alter database stop logical standby apply;
Database altered.
SQL> alter session disable guard;
Session altered.
SQL> ALTER TABLE BIGSHOW.RSTARS ADD (CITY VARCHAR2(15));
Table altered.
SQL> alter session enable guard;
Session altered.
SQL> alter database start logical standby apply;
Database altered.
SQL>
INSERT INTO RSTARS
(
  ID,
  FIRST_NAME,
  LAST_NAME
)
VALUES
(
  4,
  'Geddy',
  'Lee'
);
commit;
LOGSTDBY Apply process AS02 server id=2 pid=38 OS id=31938 stopped
Errors in file /u01/app/oracle/diag/rdbms/stest/STEST/trace/STEST_as01_31936.trc:
ORA-26676: Table 'BIGSHOW.RSTARS' has 3 columns in the LCR and 4 columns in the replicated site
Fri Aug 09 13:34:18 2013
LOGMINER: session#=1 (Logical_Standby$), builder MS01 pid=34 OS id=31930 sid=34 stopped
So you have good reason to be concerned. Thanks for the question. I had it in my notes, so I would have bet and lost!
I think you could create a child table as a workaround.
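Another option (my own assumption, not something tested in this thread): tell SQL Apply to skip DML for that one table, after which standby-only columns can be added to it freely; the trade-off is that the table stops receiving changes from the primary until it is re-instantiated. A sketch:

```sql
ALTER DATABASE STOP LOGICAL STANDBY APPLY;

-- Exclude this table from SQL Apply maintenance:
EXECUTE DBMS_LOGSTDBY.SKIP(stmt => 'DML', -
  schema_name => 'BIGSHOW', object_name => 'RSTARS');

ALTER DATABASE START LOGICAL STANDBY APPLY;

-- The table is now standby-local; ALTER TABLE ... ADD works on it,
-- but primary changes are no longer applied.  To resync later
-- (requires a dblink to the primary):
-- EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE('BIGSHOW', 'RSTARS', 'primary_link');
```

Whether skipping maintenance is acceptable depends on why the extra column is needed; for pure reporting columns, a separate child table joined on the key avoids the trade-off entirely.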
You were right about this too:
SQL> select * from DBA_LOGSTDBY_UNSUPPORTED;
no rows selected
ORA-26676 returns zero documents when I search Oracle Support today.
Best Regards
mseberg
How to fix
To correct all I did was this and Oracle added the row:
SQL> alter database stop logical standby apply;
Database altered.
SQL> alter session disable guard;
Session altered.
SQL> ALTER TABLE BIGSHOW.RSTARS DROP COLUMN CITY;
Table altered.
SQL> alter session enable guard;
Session altered.
SQL> alter database start logical standby apply;
Database altered.
SQL>
set linesize 200
select * from rstars;
ID FIRST_NAME LAST_NAME
1 Robert Plant
2 Jimmy Page
3 Ted Nugent
4 Geddy Lee
4 rows selected.
Message was edited by: mseberg ( as you can tell I'm from the 70's ) -
Create function as "/ as sysdba" not shipped/applied to logical standby
As the subject states, I was creating a password verify function, and a profile that used the function, as SYS ("/ as sysdba").
The function was not created on the logical standby, and thus the CREATE PROFILE statement failed. The function did not even appear in the standby's alert log, although the CREATE PROFILE statement did.
I created the function manually on the standby by connecting / as sysdba and creating the function and then restarted logical standby apply. The standby then created the profile and continued without error.
Is this a bug?
Should the SYS 'create function' be shipped and applied or not?
When the function create statement was run on the primary as a non-sys user with DBA privs, it created ok and was shipped and appeared on logical standby ok.
Any ideas DataGuard Gurus?
(9.2.0.7 on Solaris 8)

I logged a TAR. Oracle says:
"Ideally any object created in the SYS schema should get skipped automatically. SYS is considered an internal schema, and objects created in the SYS schema should get skipped."
"But because of the internal Bug 3576307 "LOGICAL STANDBY IS NOT SKIPPING DDL IT SHOULD", it is not skipping the DDL executed on the SYS schema. This bug is fixed in 10.2.
You can issue following statement to enable the DDL skip on sys schema.
SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
SQL> EXECUTE DBMS_LOGSTDBY.SKIP_ERROR('SCHEMA_DDL', 'SYS', null, null);
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;
After which all DDL errors encountered on any object in the SYS schema would be ignored and processing continued. This can render the TEST schema objects unusable and, if necessary, you can later recreate the tables
using the INSTANTIATE_TABLE procedure."
Makes sense now. -
Logical Standby Database and XMLDB
I couldn't find a proper group to post this message and thought I would try here.
I want to set up a Logical Standby Database for our production database server 9.2.0.4.0 with XMLDB. I am having problem with some system tables that were created when I registered some XML schemas.
These tables are in the standby database but Oracle complains the object does not exist.
Any idea?

>
The "may be" is because I have tested flashback of a physical standby to before resetlogs, but not a logical standby.
>
A physical standby keeps the DBID of the primary - a logical standby does not. That is exactly the problem that restricts the reconversion into physical from logical, and you did not encounter that problem.
>
I haven't used "keep identity" but from what I read it relates to "convert to physical" not "flashback database".
>
Exactly. And that is what the OP wants to do: convert to physical (from logical).
You mentioned that this might be possible with flashback.
Problem: During the conversion from physical to logical, the DBID gets changed unless you specify (in 11g) KEEP IDENTITY. KEEP IDENTITY is what would make it possible to reconvert into physical from logical.
In short: if there is no solution for the changed DBID of the logical standby, then flashing it back into physical, as you suggested, is not possible.
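For reference, the 11g option being discussed is specified at logical-standby creation time, not afterwards; a sketch, assuming an 11g pair (it cannot be retrofitted onto an existing logical standby whose DBID already changed):

```sql
-- Run on the standby during logical-standby creation (11g onward).
-- KEEP IDENTITY retains the primary's DBID and DB_NAME, which is
-- what later makes a reconversion to physical feasible.
ALTER DATABASE RECOVER TO LOGICAL STANDBY KEEP IDENTITY;
```

Without KEEP IDENTITY, the conversion assigns a new DBID, which is exactly the obstacle described above.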
When I saw your first answer, I thought that you might have a solution in mind in order to solve that obvious problem. Sorry for having bothered you.
Kind regards
Uwe
http://uhesse.wordpress.com -
Hi,
When we say "Logical Standby Databases are logically identical to primary databases although the physical organization and structure of the data can be different." what does it exactly means?
Does it mean that in logical standby tablespace name, schema name, table name, column names etc can be different and still has the same data as primary?
Does it mean that we can exclude indexes and constraints as present in primary?
Only the data should match with primary word by word, value by value?
I am asking this as i have never worked in a logical standby database but i seriously want to know.
Please answer.
Regards,
SID

Physical standby differs from logical standby:
Physical standby schema matches exactly the source database.
Archived redo logs are shipped directly to the standby database, which is always running in "recover" mode. Upon arrival, the archived redo logs are applied directly to the standby database.
Logical standby is different from physical standby:
Logical standby database does not have to match the schema structure of the source database.
Logical standby uses LogMiner techniques to transform the archived redo logs into native DML statements (insert, update, delete). This DML is transported and applied to the standby database.
Logical standby tables can be open for SQL queries (read only), and all other standby tables can be open for updates.
Logical standby database can have additional materialized views and indexes added for faster performance.
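Creating such an extra index or materialized view on a logical standby requires temporarily lifting the database guard; a minimal sketch (the table and index names here are made up for illustration):

```sql
-- On the logical standby; SCOTT.MYTAB / MYTAB_RPT_IX are illustrative names.
ALTER SESSION DISABLE GUARD;

CREATE INDEX scott.mytab_rpt_ix ON scott.mytab (report_date);

ALTER SESSION ENABLE GUARD;
```

The guard is what normally blocks all modifications to SQL Apply-maintained objects, so it should be re-enabled immediately after the DDL.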
Installing Physical standbys offers these benefits:
An identical physical copy of the primary database
Disaster recovery and high availability
High Data protection
Reduction in primary database workload
Faster performance
Installing Logical standbys offers:
Simultaneous use for reporting, summations and queries
Efficient use of standby hardware resources
Reduction in primary database workload
Some limitations on the use of certain datatypes -
Interesting issue with Logical Standby and database triggers
We have a Logical Standby that each month we export (expdp) a schema (CSPAN) that is being maintained by SQL Apply and import (impdp)it to a 'frozen copy' (eg CSPAN201104) using REMAP_SCHEMA.
This works fine although we've noticed that because triggers on the original schema being exported have the original schema (CSPAN) hard-referenced in the definition are imported into and owned by the new 'frozen' schema but are still 'attached' to the original schema's tables.
This is currently causing the issue where the frozen schema trigger is INVALID and causing the SQL Apply to fail. This is the error:
'CSPAN201104.AUD_R_TRG_PEOPLE' is
invalid and failed re-validation
Failed SQL update "CSPAN"."PEOPLE" set "ORG_ID" = 2, "ACTIVE_IND" = 'Y', "CREATE_DT" = TO_DATE('22-JUL-08','DD-MON-RR'),"CREATOR_NM" = 'LC', "FIRST_NM" = 'Test', "LAST_PERSON" ='log'...
Note: this trigger references the CSPAN schema (...AFTER INSERT ON CSPAN.PEOPLE...)
I suspect that triggers on a SQL Apply-maintained schema in a logical standby do not need to be valid (since they do not fire), but what about triggers that reference a SQL Apply schema while being owned by a non-SQL Apply schema? This trigger references a SQL Apply table, so it should not fire.
This is 10gR2 (10.2.0.4) on 64 bit Windows.
Regards
Graeme King

OK, I've finally got around to actually testing this, and it looks like you are not quite correct, Larry, in this statement...
'Since this trigger belongs to a new schema that is not controlled by SQL Apply (CSPAN201105) it will fire. But the trigger references a schema that is controlled by SQL Apply (CSPAN) so it will fail because it has to be validated.'
My testing concludes that even though the trigger belongs to a schema CSPAN201105 (not controlled by SQL Apply) and references a schema controlled by SQL Apply - it does not fire. However it DOES need to be valid or it breaks SQL Apply.
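For completeness: if you ever *did* want a standby-side trigger to fire for DML applied by SQL Apply, Oracle exposes a firing property per trigger; a sketch using the trigger name from the test below (my reading of the API, so treat the behavior as an assumption to verify on 10.2.0.4):

```sql
-- fire_once => FALSE makes the trigger fire for apply-process
-- (replicated) DML as well as ordinary user DML; the default
-- (TRUE) is "fire once", i.e. not for the apply processes.
BEGIN
  DBMS_DDL.SET_TRIGGER_FIRING_PROPERTY(
    trig_owner => 'CSPAN201105',
    trig_name  => 'TEST_TRIGGER_TRG',
    fire_once  => FALSE);
END;
/
```

That default would explain the 0 rows in TRIGGER_LOG_STNDBY observed in the test below, even though the trigger still has to be valid for apply to proceed.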
My testing was as follows:
Primary DB
Create new EMP table in CSPAN schema on Primary
Create new table TRIGGER_LOG in CSPAN schema on Primary
Create AFTER INSERT/UPDATE trigger on CSPAN.EMP table (that inserts into TRIGGER_LOG table)
**All of the above replicates to Standby**
Standby DB
Create new table TRIGGER_LOG_STNDBY in CSPAN201105 schema on Standby
Create new trigger in CSPAN201105 schema that fires on INSERT/UPDATE on CSPAN.EMP but inserts into the CSPAN201105.TRIGGER_LOG_STNDBY table
Primary DB
Insert 4 rows into CSPAN.EMP
Update 2 rows in CSPAN.EMP
TRIGGER_LOG table has 6 rows as expected
Standby DB
TRIGGER_LOG table has 6 rows as expected
TRIGGER_LOG_STNDBY table has **0 rows**
Re-create trigger in CSPAN201105 schema that fires on INSERT/UPDATE on CSPAN.EMP but that inserts into CSPAN201105.TRIGGER_LOG_STNDBY table) **but with syntax error**
Primary DB
Update 1 row in CSPAN.EMP
TRIGGER_LOG table has 7 rows as expected
Standby DB
SQL Apply is broken - ORA-04098: trigger 'CSPAN201105.TEST_TRIGGER_TRG' is invalid and failed re-validation -
Read statement is not supported in the BADI?
Hi,
I am using a READ statement in the BADI, but I am getting the error "READ dbtab" is not supported in the OO context. How do I change the logic below?
The BADI parameter is:
NEW_INNNN importing type WPLOG.
My code is :
data: t_INNNN like NEW_INNNN.
data: wa_INNNN like NEW_INNNN.
t_INNNN = NEW_INNNN.
READ TABLE t_INNNN INTO wa_INNNN INDEX c_1.
Please suggest me.
Regards,
Sujan

But in the same BADI, for another method, somebody is using the same logic, and there READ TABLE was supported.
See the logic they implemented:
if sy-ucomm = 'UPD' or sy-ucomm = 'SAVE'.
t_image = new_image.
t_image1 = old_image.
READ TABLE t_image1 INTO wa_image1 INDEX 1.
IF sy-subrc EQ 0.
IF wa_image1-infty = c_1007.
READ TABLE t_image INTO wa_image INDEX 1.
IF ( wa_image-vdata(2) = c_x0 OR
wa_image-vdata(2) = c_x1 ).
CLEAR wa_image.
LOOP AT t_image INTO wa_image where otype = c_s.
v_objid = wa_image-objid.
v_begda = wa_image-begda.
v_endda = wa_image-endda.
v_open = wa_image-vdata(2).
ENDLOOP.
But the method parameters are different there:
NEW_IMAGE importing type WPLOG_TAB.
Here WPLOG_TAB is a table type with line type WPLOG.
I want to use the same code, but with NEW_INNNN instead of new_image.
How should I change the above code to use NEW_INNNN?
Kindly help me. I have tried all the ways, but I still get "READ TABLE not supported" in the BADI.
Regards,
Sujan