Need for buffering in table creation.
What is the need for buffering in table creation, and what are the data class and delivery class?
Definition
The name table (nametab) contains the table and field definitions that are activated in the SAP System. An entry is made in the Repository buffer when a mass activator or a user (using the ABAP Dictionary, Transaction SE11) requests to activate a table. The corresponding name table is then generated from the information that is managed in the Repository.
The Repository buffer is mainly known as the nametab buffer (NTAB), but it is also known as the ABAP Dictionary buffer.
The description of a table in the Repository is distributed among several tables (for field definition, data element definition and domain definition). This information is summarized in the name table. The name table is saved in the following database tables:
DDNTT (table definitions)
DDNTF (field descriptions)
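At the database level the two nametab tables can be inspected directly; a read-only sketch follows (in practice you would normally browse them with SE16 rather than native SQL):

```sql
-- Count the active table and field definitions held in the nametab.
SELECT COUNT(*) FROM ddntt;   -- one row per active table definition
SELECT COUNT(*) FROM ddntf;   -- one row per field description
```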
The Repository buffer consists of four buffers in shared memory, one for each of the following:
Table definitions: TTAB buffer (table DDNTT)
Field descriptions: FTAB buffer (table DDNTF)
Initial record layouts: IREC buffer (contains the record layouts, initialized depending on the field type)
Short nametabs: SNTAB buffer (a compact summary of the table and field definitions)
There are two kinds of table buffers:
Partial table buffers
Generic table buffers
Check this link for more details.
http://www.abapprogramming.blogspot.com/2007/11/buffering-in-sap-abap.html
Regards,
Similar Messages
-
Need help in SQL table creation
Hi All,
I created a table a month back. Now I need to create another table with the same structure.
Is there any way I can get the script of the table I created earlier and use it to create the new one?
Or is there another way to create a table with the same structure as an existing table?
Please help.
Regards,
Mohan

Check out the [DBMS_METADATA.GET_DDL|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_metada.htm#i1019414] function.
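Alternatively, if only the column layout is needed and the constraints, indexes, and storage clauses can be skipped, a CREATE TABLE ... AS SELECT with an always-false predicate also works (a sketch; emp_copy is a hypothetical name for the new table):

```sql
-- Copies the column definitions of SCOTT.EMP into a new, empty table.
-- Note: NOT NULL constraints carry over, but primary/foreign keys,
-- indexes, and triggers do NOT.
CREATE TABLE emp_copy AS
  SELECT * FROM scott.emp WHERE 1 = 0;
```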
Example:
SQL> SET LONG 5000
SQL> SELECT DBMS_METADATA.GET_DDL('TABLE','EMP','SCOTT') FROM DUAL;
DBMS_METADATA.GET_DDL('TABLE','EMP','SCOTT')
CREATE TABLE "SCOTT"."EMP"
( "EMPNO" NUMBER(4,0),
"ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9),
"MGR" NUMBER(4,0),
"HIREDATE" DATE,
"SAL" NUMBER(7,2),
"COMM" NUMBER(7,2),
"DEPTNO" NUMBER(2,0),
CONSTRAINT "PK_EMP" PRIMARY KEY ("EMPNO")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS" ENABLE,
CONSTRAINT "FK_DEPTNO" FOREIGN KEY ("DEPTNO")
REFERENCES "SCOTT"."DEPT" ("DEPTNO") ENABLE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "USERS"

Edited by: Centinul on Jan 11, 2010 8:01 AM -
Static Tables Creation In oracle & Diff Between Static table ,Dynamic table
Static Tables Creation In oracle & Diff Between Static table ,Dynamic table
972471 wrote:
Static Tables Creation In oracle & Diff Between Static table, Dynamic table

All tables in a well-designed application should be static tables.
Whilst it's possible to execute dynamic code and therefore create tables dynamically at run time, this is considered poor design and should be avoided in 99.99% of cases... though for some reason some people still think they need to do this, and it's never really justified.
So what issue are you facing that you need to even bother considering dynamic tables? -
HI ALL,
I NEED FIELDS FROM VBFA TABLE
THE FIELDS I WANT IS :
CUSTOMER-ID
CUSTOMER NAME
CONTACT NAME
PROJECTID
ORDER NO
SALES MAN ID
ORDER PROCESS DATE
INVOICE DATE
GROSS AMOUNT
NET AMOUNT
POSTAL CODE.
THANKS & REGARDS,
R.VINOD.

Hi Vinod,
Try this code. I made all the modifications in your code; it should solve your issues.
REPORT zsdr_omvsa40.
* TYPE-POOLS
TYPE-POOLS: slis.
* TABLE DECLARATIONS
TABLES : vbak, vbkd,
zzvbak,
kna1, vbrk, vbrp, knvp .
* INTERNAL TABLE DECLARATIONS *
DATA: BEGIN OF i_vbak OCCURS 0,
vbelv LIKE vbfa-vbelv, " Sales Order no
vbeln like vbfa-vbeln, "Invoice No
erdat LIKE vbak-erdat, " Date on Which Record Was Created
kunnr LIKE vbak-kunnr,
ps_psp_pnr LIKE vbak-ps_psp_pnr, " Work Breakdown Structure Element
END OF i_vbak.
*DATA : BEGIN OF i_zzvbak OCCURS 0,
*vbeln LIKE zzvbak-vbeln,
*zssidc LIKE zzvbak-zssidc, "Salesman ID
*END OF i_zzvbak.
DATA : BEGIN OF i_vbrk OCCURS 0,
vbeln LIKE vbrk-vbeln,
fkdat LIKE vbrk-fkdat, "Invoice Date
END OF i_vbrk.
DATA : BEGIN OF i_kna1 OCCURS 0,
kunnr LIKE kna1-kunnr , " Customer Number 1
name1 LIKE kna1-name1, " Customer Name
pstlz LIKE kna1-pstlz , " Postal Code
END OF i_kna1.
DATA : BEGIN OF i_vbrp OCCURS 0,
vbeln LIKE vbrp-vbeln,
aubel LIKE vbrp-aubel,
netwr LIKE vbrp-netwr , " Net Value in Document Currency
kzwi1 LIKE vbrp-kzwi1, " Subtotal 1 from pricing procedure for condition
erdat LIKE vbrp-erdat, "Billing document.
END OF i_vbrp.
DATA : BEGIN OF i_knvp OCCURS 0,
parvw LIKE knvp-parvw , " Partner Function
kunnr LIKE knvp-kunnr ,
parnr LIKE knvp-parnr , " Number of contact person
END OF i_knvp .
DATA : BEGIN OF i_data OCCURS 0,
erdat LIKE vbak-erdat, " Date on Which Record Was Created
vbeln LIKE vbak-vbeln, " Sales Order no
fkdat LIKE vbrk-fkdat, " Invoice date.
kunnr LIKE kna1-kunnr , " Customer Number
ps_psp_pnr LIKE vbak-ps_psp_pnr, " Work Breakdown Structure Element
name1 LIKE kna1-name1, " Customer Name
netwr LIKE vbrp-netwr , " Net Value in Document Currency
kzwi1 LIKE vbrp-kzwi1, " Subtotal 1 from pricing procedure for condition
parvw LIKE knvp-parvw , " Partner Function
parnr LIKE knvp-parnr , " Number of contact person
*zssidc LIKE zzvbak-zssidc, "Salesman ID
pstlz LIKE kna1-pstlz , " Postal Code
END OF i_data.
* ALV Declarations
DATA: fieldcatalog TYPE slis_t_fieldcat_alv WITH HEADER LINE,
gd_tab_group TYPE slis_t_sp_group_alv,
gd_layout TYPE slis_layout_alv,
it_listheader TYPE slis_t_listheader,
gd_repid LIKE sy-repid.
* Selection Screen
SELECTION-SCREEN : BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
SELECT-OPTIONS creation FOR vbak-erdat . " Sales Order Date
SELECT-OPTIONS period FOR vbrk-fkdat . " Invoice Date
SELECT-OPTIONS order FOR vbak-vbeln . " Sales order no
SELECT-OPTIONS name FOR kna1-name1 . " Customer Name
SELECT-OPTIONS contact FOR knvp-parnr . " Contact Name.
*SELECT-OPTIONS ssid FOR zzvbak-zssidc . " Salesman ID
SELECT-OPTIONS project FOR vbak-ps_psp_pnr . " Work Breakdown Structure Element
SELECTION-SCREEN : END OF BLOCK b1.
* START-OF-SELECTION
START-OF-SELECTION.
PERFORM data_retrieval.
PERFORM build_fieldcatalog.
PERFORM BUILD_LAYOUT.
PERFORM top_of_page.
PERFORM fill_listheader USING it_listheader.
PERFORM display_alv_report.
END-OF-SELECTION.
*TOP-OF-PAGE.
TOP-OF-PAGE.
END-OF-PAGE.
*& Form BUILD_FIELDCATALOG
*       text
FORM build_fieldcatalog.
fieldcatalog-fieldname = 'KUNNR'.
fieldcatalog-seltext_m = 'Sold to Party'.
fieldcatalog-col_pos = 0.
fieldcatalog-outputlen = 10.
fieldcatalog-emphasize = 'X'.
APPEND fieldcatalog TO fieldcatalog.
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'NAME1'.
fieldcatalog-seltext_m = 'Hlev Customer'.
fieldcatalog-col_pos = 1.
APPEND fieldcatalog TO fieldcatalog.
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'PARNR'.
fieldcatalog-seltext_m = 'Contact name'.
fieldcatalog-col_pos = 2.
APPEND fieldcatalog TO fieldcatalog.
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'PS_PSP_PNR'.
fieldcatalog-seltext_m = 'Project ID'.
fieldcatalog-col_pos = 3.
fieldcatalog-do_sum = 'X'.
APPEND fieldcatalog TO fieldcatalog.
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'VBELN'.
fieldcatalog-seltext_m = 'Sales Document Type'.
fieldcatalog-col_pos = 4.
APPEND fieldcatalog TO fieldcatalog.
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'ZSSIDC'.
fieldcatalog-seltext_m = 'SSID'.
fieldcatalog-col_pos = 5.
APPEND fieldcatalog TO fieldcatalog.
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'ERDAT'.
fieldcatalog-seltext_m = 'so date'.
fieldcatalog-col_pos = 6.
APPEND fieldcatalog TO fieldcatalog.
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'FKDAT'.
fieldcatalog-seltext_m = 'inv date'.
fieldcatalog-col_pos = 7.
APPEND fieldcatalog TO fieldcatalog.
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'KZWI1'.
fieldcatalog-seltext_m = 'gross amt'.
fieldcatalog-col_pos = 8.
APPEND fieldcatalog TO fieldcatalog.
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'NETWR'.
fieldcatalog-seltext_m = 'net amt'.
fieldcatalog-col_pos = 9.
APPEND fieldcatalog TO fieldcatalog.
CLEAR fieldcatalog.
fieldcatalog-fieldname = 'PSTLZ'.
fieldcatalog-seltext_m = 'Postal code'.
fieldcatalog-col_pos = 10.
APPEND fieldcatalog TO fieldcatalog.
CLEAR fieldcatalog.
ENDFORM. "BUILD_FIELDCATALOG
*& Form DATA_RETRIEVAL
*       text
FORM data_retrieval.
SELECT vbfa~vbelv vbfa~vbeln
vbak~erdat vbak~kunnr vbak~ps_psp_pnr
INTO TABLE i_vbak
FROM vbfa
INNER JOIN vbak
ON vbfa~vbelv = vbak~vbeln
WHERE vbak~erdat IN creation
AND vbfa~vbelv IN order
AND vbak~ps_psp_pnr IN project
AND vbfa~vbtyp_n = 'M' "Subsequent doc is Invoice
AND vbfa~vbtyp_v = 'C'. "Preceding doc is Sales order
IF NOT i_vbak[] IS INITIAL.
**Change of ORDER in SELECTS HERE
SELECT vbeln fkdat FROM vbrk INTO TABLE i_vbrk
FOR ALL ENTRIES IN i_vbak
WHERE vbeln = i_vbak-vbeln
AND fkdat IN period.
IF NOT i_vbrk[] IS INITIAL.
SELECT vbeln aubel netwr kzwi1 FROM vbrp INTO TABLE i_vbrp
FOR ALL ENTRIES IN i_vbrk
WHERE VBELN = i_vbrk-vbeln.
endif.
SELECT kunnr name1 pstlz FROM kna1 INTO TABLE i_kna1 FOR ALL ENTRIES IN i_vbak
WHERE kunnr = i_vbak-kunnr
AND name1 IN name.
*SELECT vbeln zssidc FROM zzvbak INTO TABLE i_zzvbak FOR ALL ENTRIES IN i_vbak
*WHERE vbeln = i_vbak-vbeln
*AND zssidc IN ssid .
select netwr kzwi1 erdat from vbrp into table i_vbrp for all entries in i_vbak
where erdat = i_vbak-erdat.
SELECT kunnr parnr parvw FROM knvp INTO CORRESPONDING FIELDS OF TABLE i_knvp FOR ALL ENTRIES IN i_vbak
WHERE kunnr = i_vbak-kunnr
AND parvw = 'AP'
AND parnr IN contact.
ENDIF .
SORT i_vbak BY vbeln.
SORT i_vbrk BY vbeln.
SORT i_kna1 BY kunnr. "Required for the BINARY SEARCH reads below
SORT i_knvp BY kunnr.
LOOP AT i_vbrp. "Invoice Item data
MOVE i_vbrp-netwr TO i_data-netwr .
MOVE i_vbrp-kzwi1 TO i_data-kzwi1.
READ table I_VBAK WITH KEY VBELN = I_VBRP-VBELN BINARY SEARCH. "Sales Order info
IF SY-SUBRC = 0.
MOVE I_VBAK-VBELV TO I_DATA-VBELN. "Sales Order no
MOVE I_VBAK-erdat TO I_DATA-erdat. " Date on Which Record Was Created
MOVE I_VBAK-kunnr TO I_DATA-KUNNR. "Customer No
MOVE I_VBAK-ps_psp_pnr TO I_DATA-ps_psp_pnr. " Work Breakdown Structure Element
endif.
READ TABLE I_VBRK WITH KEY VBELN = I_VBRP-VBELN BINARY SEARCH. "Invoice header info
IF SY-SUBRC = 0.
MOVE i_vbrk-fkdat TO i_data-fkdat.
endif.
READ TABLE I_KNA1 WITH KEY KUNNR = I_VBAK-KUNNR BINARY SEARCH. "Customer info
IF SY-SUBRC = 0.
MOVE i_kna1-kunnr TO i_data-kunnr.
MOVE i_kna1-name1 TO i_data-name1.
MOVE i_kna1-pstlz TO i_data-pstlz .
endif.
READ TABLE I_KNvp WITH KEY KUNNR = I_VBAK-KUNNR BINARY SEARCH. "Partner info
IF SY-SUBRC = 0.
MOVE i_knvp-parnr TO i_data-parnr.
endif.
APPEND i_data.
ENDLOOP.
ENDFORM. "DATA_RETRIEVAL
*& Form DISPLAY_ALV_REPORT
*       text
FORM display_alv_report.
GD_REPID = SY-REPID.
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
EXPORTING
*   I_INTERFACE_CHECK           = ' '
*   I_BYPASSING_BUFFER          = ' '
*   I_BUFFER_ACTIVE             = ' '
i_callback_program          = sy-repid
*   I_CALLBACK_PF_STATUS_SET    = ' '
*   I_CALLBACK_USER_COMMAND     = ' '
i_callback_top_of_page      = 'TOP_OF_PAGE'
*   I_CALLBACK_HTML_TOP_OF_PAGE = ' '
*   I_CALLBACK_HTML_END_OF_LIST = ' '
*   I_STRUCTURE_NAME            =
*   I_BACKGROUND_ID             = ' '
*   I_GRID_TITLE                =
*   I_GRID_SETTINGS             =
*   IS_LAYOUT                   =
it_fieldcat                 = fieldcatalog[]
*   IT_EXCLUDING                =
*   IT_SPECIAL_GROUPS           =
*   IT_SORT                     =
*   IT_FILTER                   =
*   IS_SEL_HIDE                 =
i_default                   = 'X'
i_save                      = ' '
*   IS_VARIANT                  =
*   IT_EVENTS                   =
*   IT_EVENT_EXIT               =
*   IS_PRINT                    =
*   IS_REPREP_ID                =
i_screen_start_column       = 0
i_screen_start_line         = 0
i_screen_end_column         = 0
i_screen_end_line           = 0
*   I_HTML_HEIGHT_TOP           = 0
*   I_HTML_HEIGHT_END           = 0
*   IT_ALV_GRAPHICS             =
*   IT_HYPERLINK                =
*   IT_ADD_FIELDCAT             =
*   IT_EXCEPT_QINFO             =
*   IR_SALV_FULLSCREEN_ADAPTER  =
* IMPORTING
*   E_EXIT_CAUSED_BY_CALLER     =
*   ES_EXIT_CAUSED_BY_USER      =
TABLES
t_outtab                    = i_data
EXCEPTIONS
program_error               = 1
OTHERS                      = 2.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
ENDFORM. "DISPLAY_ALV_REPORT
* FORM FOR FILLING LISTHEADER *
FORM fill_listheader USING it_listheader TYPE slis_t_listheader.
DATA : wa_listheader TYPE slis_listheader.
wa_listheader-typ = 'H'.
wa_listheader-info = 'Noel Gifts International Limited '.
APPEND wa_listheader TO it_listheader.
wa_listheader-typ = 'S'.
wa_listheader-info = 'CUSTOMER CREDIT EXCEPTION REPORT' .
APPEND wa_listheader TO it_listheader.
CLEAR wa_listheader.
ENDFORM. "fill_listheader
*& Form top_of_page
*       text
FORM top_of_page.
CALL FUNCTION 'REUSE_ALV_COMMENTARY_WRITE'
EXPORTING
it_list_commentary = it_listheader.
ENDFORM. "top_of_page
REWARD IF HELPFUL. -
DW Tables Creation - Columns Missing
Hi All,
When I run the ETL for the execution plan containing tasks related to SCM, some tasks fail, predominantly due to columns missing in the DW tables.
I dropped and recreated the DW tables (Tools -> ETLMgmt -> Configure), but it didn't help.
I inspected the ctl file, oracle_bi_dw.ctl, in \OracleBI\DAC\conf\sqlgen\ctl-file, which I believe is used to create the DW tables. The CTL file has definitions for all the tables needed, but certain columns are missing, and because of this my ETL fails. I tried creating the columns manually after each ETL task failed.
I need guidance on how to recreate the tables, or which configuration step to redo, to get the correct CTL file used for the DW table creation.
Thanks,
Raghav

Hi All,
Re-installing BI Apps did not work. I guess the ctl file is created while you import the DAC metadata.
I copied the ctl file from a fellow developer, dropped and recreated all the tables, and all the DW table definitions referenced from the new ctl file are fine now.
Now I will always keep a backup of a proper ctl file to guard against such problems in the future :)
Thanks,
Raghav -
Editable Table Creation using BSP Application
hi all,
I want to create an editable table; when I enter any data into it, the data should be saved in a Z table.
Could anyone explain the procedure?
Thanks in Advance
Hema

Hi,
This is more a question for the BSP forum.
Anyway, as such it's really easy, since you can use HTML in order to export to Excel. All you need to do is add
runtime->server->response->set_header_field( name = 'Content-Type'
value = 'application/vnd.ms-excel' ).
runtime->server->response->delete_header_field( name = 'Cache-Control' ).
runtime->server->response->delete_header_field( name = 'Expires' ).
runtime->server->response->delete_header_field( name = 'Pragma' ).
Also check threads like
Download BSP data into Excel
export bsp-table to excel
Export BSP Table to Excel
Eddy
PS. Reward useful answers and earn points yourself -
Issue with DWH DB tables creation
Hi,
While generating the data warehouse tables (section 4.10.1, How to Create Data Warehouse Tables), I got an error that states "Creating Datawarehouse tables Failure".
But when I checked the log file 'generate_ctl.log', it has the below message:
"Schema will be created from the following containers:
Oracle 11.5.10
Oracle R12
Universal
Conflict(s) between containers:
Table Name : W_BOM_ITEM_FS
Column Name: INTEGRATION_ID
The column properties that are different: [keyTypeCode]
Success!"
When I checked the DWH database, I could find DWH tables, but I am not sure whether all tables were created.
Can anyone tell me whether my DWH tables are all created? How many tables would be created for the above EBS containers?
Also, should I drop any of the EBS containers to create the DWH tables successfully?
The installation guide states that when DWH table creation fails, 'createtables.log' won't be created. But in my case this log file got created!
Edited by: userOO7 on Nov 19, 2008 2:41 PM

I saw the same message. I also noticed I am unable to load any BOM items into that fact table. It looks like the BOM_EXPLODER package call is not keeping any rows in BOM_EXPLOSION_TEMP, so no rows are loaded into that fact table. Someone needs to log an SR for this.
*****START LOAD SESSION*****
Load Start Time: Wed Nov 19 17:13:42 2008
Target tables:
W_BOM_ITEM_FS
READER_2_1_1> BLKR_16019 Read [0] rows, read [0] error rows for source table [BOM_EXPLOSION_TEMP] instance name [mplt_BC_ORA_BOMItemFact.BOM_EXPLOSION_TEMP]
READER_2_1_1> BLKR_16008 Reader run completed.
TRANSF_2_1_1> DBG_21216 Finished transformations for Source Qualifier [mplt_BC_ORA_BOMItemFact.SQ_BOM_EXPLOSION_TEMP]. Total errors [0]
WRITER_2_*_1> WRT_8167 Start loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
WRITER_2_*_1> WRT_8168 End loading table [W_BOM_ITEM_FS] at: Wed Nov 19 17:13:42 2008
WRITER_2_*_1> WRT_8035 Load complete time: Wed Nov 19 17:13:42 2008
LOAD SUMMARY
============
WRT_8036 Target: W_BOM_ITEM_FS (Instance Name: [W_BOM_ITEM_FS])
WRT_8044 No data loaded for this target
WRITER_2_*_1> WRT_8043 *****END LOAD SESSION*****
WRITER_2_*_1> WRT_8006 Writer run completed.
I now see it is covered in the release notes:
http://download.oracle.com/docs/cd/E12127_01/doc/bia.795/e12087/chapter.htm#CHDFJHHB
1.3.31 No Data Is Loaded Into W_BOM_ITEM_F And W_BOM_ITEM_FS
The mapping SDE_ORA_BOMItemFact needs to call a Stored Procedure (SP) in the Oracle EBS instance, which inserts rows into a global temporary table (duration SYS$SESSION, that is, the data will be lost if the session is closed). This Stored Procedure does not have an explicit commit. The Stored Procedure then needs to read the rows in the temporary table into the warehouse.
In order for the mapping to work, Informatica needs to share the same connection for the SP and the SQL qualifier during ETL. This feature was available in the Informatica 7.x release, but it is not available in Informatica release 8.1.1 (SP4). As a result, W_BOM_ITEM_FS and W_BOM_ITEM_F are not loaded properly.
Workaround
For all Oracle EBS customers:
Open package body bompexpl.
Look for text "END exploder_userexit;", scroll a few lines above, and add a "commit;" command before "EXCEPTION".
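The placement described in the step above can be sketched as follows; everything except the added COMMIT is pre-existing EBS code, elided or shown only as a placeholder here:

```sql
-- Sketch only: near the end of procedure exploder_userexit
-- inside package body BOMPEXPL.
   commit;        -- the added line: the explicit commit the release note calls for
EXCEPTION         -- existing exception block (handler body elided)
   WHEN OTHERS THEN
      raise;      -- placeholder for the existing handler
END exploder_userexit;
```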
Save and compile the package. -
How to observe a table creation/insertion ?
Hello,
Questions :
1] How can I observe a table insert operation ?
2] When we cut/paste an existing table frame, what is the order of page-item hierarchy creation? For example, is the text frame created first and the table then inserted into it?
Currently I have implemented code to observe page-item creation (of text frames, rectangle frames, etc.) using IID_IHIERARCHY_DOCUMENT and kNewPageItemCmdBoss. This is working fine.
But when I copy/cut-paste an existing table frame, I get kNewPageItemCmdBoss for the text-frame page item which contains the table frame, but I do not get a notification for the table creation.
What I want to do is: when the user pastes a table frame, I need to access the newly created table and get its model UID (using GetNthModelUID) just after the paste process (i.e., during processing of kNewPageItemCmdBoss).
I have tried the following code, but it gives me a table model count of 0.
Note : following code is added in IID_IHIERARCHY_DOCUMENT protocol observer code (please refer PstLstDocObserver.cpp code snippet of InDesign SDK)
if (theChange == kNewPageItemCmdBoss)
{
    int32 SelectedFrames = itemList.Length();
    if (SelectedFrames > 0)
    {
        UIDRef frameUIDRef, childUIDRef, parentUIDRef;
        UID frameModelUID, childUID, parentUID;
        for (int32 i = 0; i < SelectedFrames; i++)
        {
            frameUIDRef = itemList.GetRef(i);
            IDataBase* db = frameUIDRef.GetDataBase();
            InterfacePtr<IHierarchy> itemHierarchy(frameUIDRef, IID_IHIERARCHY);
            if (itemHierarchy)
            {
                if (itemHierarchy->GetChildCount() > 0)
                {
                    childUID = itemHierarchy->GetChildUID(0);
                    childUIDRef = UIDRef(db, childUID);
                    if (childUIDRef)
                    {
                        int32 tableCount = 0;
                        UIDRef tableModelUIDRef;
                        InterfacePtr<IMultiColumnTextFrame> mcFrame(childUIDRef, UseDefaultIID());
                        if (mcFrame)
                        {
                            InterfacePtr<ITextModel> textModel(mcFrame->QueryTextModel(), UseDefaultIID());
                            if (textModel)
                            {
                                InterfacePtr<ITableModelList> tableList(textModel, UseDefaultIID());
                                if (tableList)
                                {
                                    // Here I get the tableCount as 0
                                    tableCount = tableList->GetModelCount();
                                    InterfacePtr<ITableModel> table(tableList->QueryNthModel(tableCount - 1));
                                    if (table)
                                    {
                                        tableModelUIDRef = ::GetUIDRef(table);
                                        frameModelUID = tableModelUIDRef.GetUID();
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
Note : using CS6/CC SDK
Thanks,
-Harsh

An observer attached on kDocBoss will not give a notification on table creation, as the table does not fall into the document hierarchy.
You can try to attach Observer on kDocWorkspaceBoss and see if you get the notification.
Also, as for your problem of not getting the table at the time of copy/paste: are you getting the reference for IMultiColumnTextFrame?
You might not get some entities (XML tags, labels, etc.) on the page item in kDocObserver as soon as the frame is pasted.
You can use a workaround to capture the pasted table frame and get its table by hooking up to the responder service 'kNewStorySignalResponderService',
and then use your above logic inside it. -
I've recently completed a database upgrade from 10.2.0.3 to 11.2.0.1 using the DBUA.
I've since encountered a slowdown when running a script that drops and recreates a series of ~250 tables. The script normally runs in around 19 seconds; after the upgrade, it requires ~2 minutes.
By chance has anyone encountered something similar?
The problem may be related to the behavior of an "after CREATE on schema" trigger, which grants select privileges to a role through a dbms_job call, and to differences between 10g and the database that was upgraded from 10g to 11g. I am currently researching this angle.
I will be using the following table creation DDL for this abbreviated test case:
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
) tablespace LIVE_DATA;

When calling the above DDL, an "after CREATE on schema" trigger is fired, which schedules a job to run immediately and grant select privilege to a role for the table that was just created:
{code}
create or replace
trigger select_grant
after CREATE on schema
declare
l_str varchar2(255);
l_job number;
begin
if ( ora_dict_obj_type = 'TABLE' ) then
l_str := 'execute immediate ''grant select on ' ||
ora_dict_obj_name ||
' to select_role'';';
dbms_job.submit( l_job, l_str );
end if;
end;
/
{code}
Below I've included data on two separate test runs. The first is on the upgraded database and includes optimizer parameters and an abbreviated TKPROF. I've also included the offending SYS-generated SQL, which is not issued when the same test is run on a 10g environment set up with a similar test case. The 10g test run's TKPROF is also included below.
The version of the database is 11.2.0.1.
These are the parameters relevant to the optimizer for the test run on the upgraded 11g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_capture_sql_plan_baselines boolean FALSE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 11.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
optimizer_use_invisible_indexes boolean FALSE
optimizer_use_pending_statistics boolean FALSE
optimizer_use_sql_plan_baselines boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 03-11-2010 16:33
SYSSTATS_INFO DSTOP 03-11-2010 17:03
SYSSTATS_INFO FLAGS 0
SYSSTATS_MAIN CPUSPEEDNW 713.978495
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM 1565.746
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED 2310
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Output from TKPROF on the 11g SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
) tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 324
{code}
... large section omitted ...
Here is the performance hit portion of the TKPROF on the 11g SID:
{code}
SQL ID: fsbqktj5vw6n9
Plan Hash: 1443566277
select next_run_date, obj#, run_job, sch_job
from
(select decode(bitand(a.flags, 16384), 0, a.next_run_date,
a.last_enabled_time) next_run_date, a.obj# obj#,
decode(bitand(a.flags, 16384), 0, 0, 1) run_job, a.sch_job sch_job from
(select p.obj# obj#, p.flags flags, p.next_run_date next_run_date,
p.job_status job_status, p.class_oid class_oid, p.last_enabled_time
last_enabled_time, p.instance_id instance_id, 1 sch_job from
sys.scheduler$_job p where bitand(p.job_status, 3) = 1 and
((bitand(p.flags, 134217728 + 268435456) = 0) or
(bitand(p.job_status, 1024) <> 0)) and bitand(p.flags, 4096) = 0 and
p.instance_id is NULL and (p.class_oid is null or (p.class_oid is
not null and p.class_oid in (select b.obj# from sys.scheduler$_class b
where b.affinity is null))) UNION ALL select
q.obj#, q.flags, q.next_run_date, q.job_status, q.class_oid,
q.last_enabled_time, q.instance_id, 1 from sys.scheduler$_lightweight_job
q where bitand(q.job_status, 3) = 1 and ((bitand(q.flags, 134217728 +
268435456) = 0) or (bitand(q.job_status, 1024) <> 0)) and
bitand(q.flags, 4096) = 0 and q.instance_id is NULL and (q.class_oid
is null or (q.class_oid is not null and q.class_oid in (select
c.obj# from sys.scheduler$_class c where
c.affinity is null))) UNION ALL select j.job, 0,
from_tz(cast(j.next_date as timestamp), to_char(systimestamp,'TZH:TZM')
), 1, NULL, from_tz(cast(j.next_date as timestamp),
to_char(systimestamp,'TZH:TZM')), NULL, 0 from sys.job$ j where
(j.field1 is null or j.field1 = 0) and j.this_date is null) a order by
1) where rownum = 1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.47 0.47 0 9384 0 1
total 3 0.48 0.48 0 9384 0 1
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: SYS (recursive depth: 1)
Rows Row Source Operation
1 COUNT STOPKEY (cr=9384 pr=0 pw=0 time=0 us)
1 VIEW (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=6615380 card=194570)
1 SORT ORDER BY STOPKEY (cr=9384 pr=0 pw=0 time=0 us cost=5344 size=11479630 card=194570)
194790 VIEW (cr=9384 pr=0 pw=0 time=537269 us cost=2563 size=11479630 card=194570)
194790 UNION-ALL (cr=9384 pr=0 pw=0 time=439235 us)
231 FILTER (cr=68 pr=0 pw=0 time=920 us)
231 TABLE ACCESS FULL SCHEDULER$_JOB (cr=66 pr=0 pw=0 time=690 us cost=19 size=13157 card=223)
1 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=2 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
1 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=1 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
0 FILTER (cr=3 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL SCHEDULER$_LIGHTWEIGHT_JOB (cr=3 pr=0 pw=0 time=0 us cost=2 size=95 card=1)
0 TABLE ACCESS BY INDEX ROWID SCHEDULER$_CLASS (cr=0 pr=0 pw=0 time=0 us cost=1 size=40 card=1)
0 INDEX UNIQUE SCAN SCHEDULER$_CLASS_PK (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=1)(object id 5056)
194559 TABLE ACCESS FULL JOB$ (cr=9313 pr=0 pw=0 time=167294 us cost=2542 size=2529254 card=194558)
{code}
and the totals at the end of the TKPROF on the 11g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 4 0
Fetch 0 0.00 0.00 0 0 0 0
total 3 0.00 0.00 0 0 4 0
Misses in library cache during parse: 1
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 70 0.00 0.00 0 0 0 0
Execute 85 0.01 0.01 0 62 208 37
Fetch 49 0.48 0.49 0 9490 0 35
total 204 0.51 0.51 0 9552 208 72
Misses in library cache during parse: 5
Misses in library cache during execute: 3
35 user SQL statements in session.
53 internal SQL statements in session.
88 SQL statements in session.
Trace file: 11gSID_ora_17721.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
35 user SQL statements in trace file.
53 internal SQL statements in trace file.
88 SQL statements in trace file.
51 unique SQL statements in trace file.
1590 lines in trace file.
18 elapsed seconds in trace file.
{code}
The version of the database is 10.2.0.3.0.
These are the parameters relevant to the optimizer for the test run on the 10g SID:
{code}
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.3
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter db_file_multi
NAME TYPE VALUE
db_file_multiblock_read_count integer 8
SQL> show parameter db_block_size
NAME TYPE VALUE
db_block_size integer 8192
SQL> show parameter cursor_sharing
NAME TYPE VALUE
cursor_sharing string EXACT
SQL> column sname format a20
SQL> column pname format a20
SQL> column pval2 format a20
SQL> select sname, pname, pval1, pval2 from sys.aux_stats$;
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 09-24-2007 11:09
SYSSTATS_INFO DSTOP 09-24-2007 11:09
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 2110.16949
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN SREADTIM
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN SLAVETHR
13 rows selected.
{code}
Now for the TKPROF of a mirrored test environment running on a 10G SID:
{code}
create table ALLIANCE (
ALLIANCEID NUMBER(10) not null,
NAME VARCHAR2(40) not null,
CREATION_DATE DATE,
constraint PK_ALLIANCE primary key (ALLIANCEID)
using index
tablespace LIVE_INDEX
) tablespace LIVE_DATA
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.01 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.01 0 2 16 0
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 113
{code}
... large section omitted ...
Totals for the TKPROF on the 10g SID:
{code}
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 2 16 0
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.02 0 2 16 0
Misses in library cache during parse: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 65 0.01 0.01 0 1 32 0
Execute 84 0.04 0.09 20 90 272 35
Fetch 88 0.00 0.10 30 281 0 64
total 237 0.07 0.21 50 372 304 99
Misses in library cache during parse: 38
Misses in library cache during execute: 32
10 user SQL statements in session.
76 internal SQL statements in session.
86 SQL statements in session.
Trace file: 10gSID_ora_32003.trc
Trace file compatibility: 10.01.00
Sort options: default
1 session in tracefile.
10 user SQL statements in trace file.
76 internal SQL statements in trace file.
86 SQL statements in trace file.
43 unique SQL statements in trace file.
949 lines in trace file.
0 elapsed seconds in trace file.
{code}
Edited by: user8598842 on Mar 11, 2010 5:08 PM

So while this certainly isn't the most elegant of solutions, and most assuredly isn't supported by Oracle...
I've used DBMS_IJOB.DROP_USER_JOBS('username') to remove the 194558 orphaned job entries from the JOB$ table. Don't ask; I've no clue how they all got there, but I've prepared some evil looks to unleash upon certain developers tomorrow morning.
Not being able to reorganize the JOB$ table to free the now-wasted ~67MB of space, I've opted to create a new index on the JOB$ table to sidestep the full table scan:
CREATE INDEX SYS.JOB_F1_THIS_NEXT ON SYS.JOB$ (FIELD1, THIS_DATE, NEXT_DATE) TABLESPACE SYSTEM;

The next option would be to find a way to grant the select privilege to the role without using the aforementioned "after CREATE on schema" trigger and dbms_job call. This method was adopted to cover situations in which a developer manually added a table directly to the database rather than using the provided scripts to recreate their test environment.
I assume that the following quote from the 11gR2 documentation is mistaken, and there is no such beast as "create or replace table" in 11g:
http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_9003.htm#i2061306
"Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters. Truncating and replacing have none of these effects. Therefore, removing rows with the TRUNCATE statement or replacing the table with a *CREATE OR REPLACE TABLE* statement can be more efficient than dropping and re-creating a table." -
Need To Create a table in SQL Server and do some calculation into the table from Oracle and SQL
Hello All,
I'm moving data from Oracle to SQL Server with ETL (80 tables with data) and I want to track the number of records I'm moving on a daily basis, so I need to create a table in SQL Server with 4 columns: Table name, OracleRowsCount, SqlRowCount,
and Diff (OracleRowsCount - SqlRowCount). That will tell me, for each table, how many rows I have in Oracle, how many rows I have in SQL after the ETL load, and the difference between them, something like this:
Table Name    OracleRowsCount    SqlRowCount    Diff
Customer      150                150            0
Sales         2000               1998           2
Devisions     5                  5              0
(I could add a lot of SQL Tasks and variables, one per table, but it doesn't seem logical to do that; I tried to find a way to handle it in VB but didn't find one.)
What is the simplest way to do it?
Thank you
Best Regards
Daniel
Hi Daniel,
According to your description, what you want is an indicator to show whether all the rows were inserted into the destination table. To achieve this, you can add a Row Count Transformation after the OLE DB Destination and redirect bad rows to it; this way we get the count of the bad rows instead of simply discarding them. Since the row count value is stored in a variable, we can create another string-type variable to retrieve the row count value from the variable used by the Row Count Transformation, and then use a Send Mail Task to send the row count value in an email message body. You can also insert the row count value into the SQL Server table through an Execute SQL Task. Then, you can check whether bad rows were generated in the package by querying this table.
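Alternatively, the tracking table from the original question can be filled by simply counting rows on both sides after each load. A rough sketch, using in-memory sqlite3 databases as stand-ins for the Oracle source and SQL Server destination (the connections, table list, and `row_count_diffs` helper are all illustrative assumptions, not part of the SSIS package):

```python
import sqlite3

def row_count_diffs(src_conn, dst_conn, tables):
    """Return (table, src_count, dst_count, diff) for each table."""
    results = []
    for table in tables:
        src = src_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        dst = dst_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        results.append((table, src, dst, src - dst))
    return results

# Demo: two in-memory databases standing in for Oracle / SQL Server,
# where the destination is missing one row after the load.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for conn, n in ((src, 3), (dst, 2)):
    conn.execute("CREATE TABLE Customer (id INTEGER)")
    conn.executemany("INSERT INTO Customer VALUES (?)", [(i,) for i in range(n)])

print(row_count_diffs(src, dst, ["Customer"]))  # [('Customer', 3, 2, 1)]
```

The resulting tuples can then be inserted into the 4-column tracking table with one Execute SQL Task equivalent per load.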
Regards,
Mike Yin
TechNet Community Support -
Table creation - order of events
I am trying to get some help on the order I should be carrying out table creation tasks.
Say I create a simple table:
create table title (
title_id number(2) not null,
title varchar2(10) not null,
effective_from date not null,
effective_to date not null,
constraint pk_title primary key (title_id)
)
I believe I should populate the data, then create my index:
create unique index title_title_id_idx on title (title_id asc)
But I have read that Oracle will automatically create an index for my primary key if I do not do so myself.
At what point does Oracle create the index on my behalf and how do I stop it?
Should I only apply the primary key constraint after the data has been loaded as well?
Even then, if I add the primary key constraint, will Oracle not immediately create an index for me when I am about to create a specific one matching my naming conventions?
Yeah, but just handle it the way you would handle any other constraint violation: with the EXCEPTIONS INTO clause...
SQL> select index_name, uniqueness from user_indexes
2 where table_name = 'APC'
3 /
no rows selected
SQL> insert into apc values (1)
2 /
1 row created.
SQL> insert into apc values (2)
2 /
1 row created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 /
Table altered.
SQL> insert into apc values (2)
2 /
insert into apc values (2)
ERROR at line 1:
ORA-00001: unique constraint (APC.APC_PK) violated
SQL> alter table apc drop constraint apc_pk
2 /
Table altered.
SQL> insert into apc values (2)
2 /
1 row created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 /
alter table apc add constraint apc_pk primary key (col1)
ERROR at line 1:
ORA-02437: cannot validate (APC.APC_PK) - primary key violated
SQL> @%ORACLE_HOME%/rdbms/admin/utlexcpt.sql
Table created.
SQL> alter table apc add constraint apc_pk primary key (col1)
2 using index ( create unique index my_new_index on apc (col1))
3 exceptions into EXCEPTIONS
4 /
alter table apc add constraint apc_pk primary key (col1)
ERROR at line 1:
ORA-02437: cannot validate (APC.APC_PK) - primary key violated
SQL> select * from apc where rowid in ( select row_id from exceptions)
2 /
COL1
2
2
SQL>
All this is in the documentation. Find out more.
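The EXCEPTIONS INTO clause demonstrated above is Oracle-specific. On any SQL database, the rows that would violate the primary key can also be found with a plain GROUP BY ... HAVING before the constraint is added. A small sketch using sqlite3 (the table mirrors the apc demo above; keeping the lowest rowid as the survivor is an assumed deduplication rule, not part of the original answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE apc (col1 INTEGER)")
conn.executemany("INSERT INTO apc VALUES (?)", [(1,), (2,), (2,)])

# Values that would violate a primary key on col1
dups = conn.execute(
    "SELECT col1, COUNT(*) FROM apc GROUP BY col1 HAVING COUNT(*) > 1"
).fetchall()
print(dups)  # [(2, 2)]

# Deduplicate (keep the row with the lowest rowid per col1), then the
# unique index that would back the primary key can be created cleanly.
conn.execute(
    "DELETE FROM apc WHERE rowid NOT IN "
    "(SELECT MIN(rowid) FROM apc GROUP BY col1)"
)
conn.execute("CREATE UNIQUE INDEX apc_pk ON apc (col1)")
```

This is the same load-first, constrain-later order the original poster asked about, just with the duplicate hunt done by query instead of an EXCEPTIONS table.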
Cheers, APC -
How to create monthly table creation?
Hi Mates,
We are unable to create tables by month in the analytics database; the data keeps loading into the previous table continuously, as shown in the attached screenshot. The schema user has the creation privilege. We are using WebCenter Interaction 10gR4.
How can we get the monthly table creation working, please?
Thanks,
Katherine
Hi Trevor,
Thanks for your help. We were able to create tables and load data until April, as attached.
However, the analytics user's privileges were modified in April due to a server operation.
Since then, there has been a message in the analytics log saying there is no permission to create tables.
The analytics user's privileges were re-granted after we checked this message. As I suspected, the issue occurred after the privileges were modified.
Currently, the analytics users are granted all privileges.
Any idea please?
Thanks,
Kathy -
Guys,
I need to update table A columns col3, col4, col5 and col6 from table B columns col3, col4, col5 and col6; however, table B's col5 and col6 values need to come from table C col1.
That is, table B's col5 and col6 have values in them, but I need to replace them with the matching value from table C col1 and update table A's col5 and col6 accordingly.
Table A and table B have col1 and col2 in common.
i am trying something like this.
Update a set
a.col3 = b.col3,
a.col4 = b.col4,
a.col5 = (select col1 from table_c c where c.col2 = b.col5),
a.col6 = (select col1 from table_c c where c.col2 = b.col6)
from table_a a inner join table_b b
on a.col1 = b.col1 and a.col2 = b.col2
can someone help me reframe above update query?
Thanks in advance for your help.
Try the below (if you can have multiple matching values, you may need the TOP 1 variant shown in the commented code at the end of the script):
create Table tableA(Col1 int,Col2 int,Col3 int,Col4 int,Col5 int,Col6 int)
Insert into tableA values(1,2,3,4,5,6)
create Table tableB(Col1 int,Col2 int,Col3 int,Col4 int,Col5 int,Col6 int)
Insert into tableB values(1,2,30,40,50,60)
create Table tableC(Col1 int,Col2 int,Col3 int,Col4 int,Col5 int,Col6 int)
Insert into tableC values(100,50,30,40,2,2)
--Insert into tableC values(200,50,30,40,2,2)
Insert into tableC values(100,60,30,40,2,2)
Select * From tablea
Update a Set
a.col3 = b.col3,
a.col4 = b.col4,
a.col5 = (select col1 from tablec c where c.col2=b.col5 ),
a.col6 = (select col1 from tablec c where c.col2=b.col6 )
from tableA a inner join tableb b
on a.col1=b.col1 and a.col2=b.col2
--Update a Set
--a.col3 = b.col3,
--a.col4 = b.col4,
--a.col5 = (select Top 1 col1 from tablec c where c.col2=b.col5 Order by c.Col1 asc),
--a.col6 = (select Top 1 col1 from tablec c where c.col2=b.col6 Order by c.Col1 asc)
--from tableA a inner join tableb b
--on a.col1=b.col1 and a.col2=b.col2
Select * From tablea
Drop table tablea,Tableb,TableC -
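For readers without a SQL Server instance handy, the same update pattern can be checked end-to-end with portable correlated subqueries. A sketch using sqlite3 (tableC is simplified here to the two columns the lookup actually uses, which is a deliberate departure from the six-column demo table above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tableA (col1 INT, col2 INT, col3 INT, col4 INT, col5 INT, col6 INT);
INSERT INTO tableA VALUES (1, 2, 3, 4, 5, 6);
CREATE TABLE tableB (col1 INT, col2 INT, col3 INT, col4 INT, col5 INT, col6 INT);
INSERT INTO tableB VALUES (1, 2, 30, 40, 50, 60);
CREATE TABLE tableC (col1 INT, col2 INT);   -- lookup: col2 -> col1
INSERT INTO tableC VALUES (100, 50);
INSERT INTO tableC VALUES (200, 60);

-- col3/col4 come straight from tableB; col5/col6 are translated via tableC
UPDATE tableA SET
  col3 = (SELECT b.col3 FROM tableB b
          WHERE b.col1 = tableA.col1 AND b.col2 = tableA.col2),
  col4 = (SELECT b.col4 FROM tableB b
          WHERE b.col1 = tableA.col1 AND b.col2 = tableA.col2),
  col5 = (SELECT c.col1 FROM tableB b JOIN tableC c ON c.col2 = b.col5
          WHERE b.col1 = tableA.col1 AND b.col2 = tableA.col2),
  col6 = (SELECT c.col1 FROM tableB b JOIN tableC c ON c.col2 = b.col6
          WHERE b.col1 = tableA.col1 AND b.col2 = tableA.col2);
""")
print(conn.execute("SELECT * FROM tableA").fetchone())  # (1, 2, 30, 40, 100, 200)
```

The UPDATE ... FROM join syntax in the T-SQL answer is more compact, but the correlated form runs on nearly any engine.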
Alert message is needed at the time of creation, if it reaches the maximum stock level
Dear SAP experts,
Can you please help me: an alert message is needed at the time of creation if the stock reaches the maximum stock level (Stock+level).
Thanks
Mohit
Hi,
Please refer the below links.
Alert Message
Re: Alert Message
Hope it helps you.
Thanks. -
DYNAMIC INTERNAL TABLE CREATION BASED ON THE CONTENT OF ANOTHER INTERNAL TABLE
Hi All
I need to create an internal table at runtime.
I have a selection screen parameter which is specific to country, Which can take values below
eg:- IT_AREA for Italy(IT)
FR_AREA for France(FR)
IE_AREA for Ireland (IE)... and similarly for other countries
Based on the Above parameter, I need to create Internal Table as below
DATA: itab TYPE italy_data Occurs 0.
If I declare it as above, then itab has the fields from italy_data, and I will pass this internal table to a function module to get data into it.
My requirement is to create the internal table itab at runtime for italy_data, france_data, or ireland_data, based on the selection screen parameter. The tables may have a different number of fields depending on the country.
Can anyone help me with this?
Hi,
Here is a sample code to create a dynamic internal table.
REPORT ytrab03.
TABLES: mara, makt.
TYPE-POOLS: slis.
DATA: it_fcat TYPE slis_t_fieldcat_alv,
is_fcat LIKE LINE OF it_fcat,
ls_layout TYPE slis_layout_alv.
DATA: it_fieldcat TYPE lvc_t_fcat,
is_fieldcat LIKE LINE OF it_fieldcat.
DATA: new_table TYPE REF TO data,
new_line TYPE REF TO data,
ob_cont_alv TYPE REF TO cl_gui_custom_container,
ob_alv TYPE REF TO cl_gui_alv_grid,
vg_campos(255) TYPE c,
i_campos LIKE TABLE OF vg_campos,
vg_campo(30) TYPE c,
vg_tables(60) TYPE c.
DATA: e_params LIKE zutsvga_alv_01.
FIELD-SYMBOLS: <l_table> TYPE table,
<l_line> TYPE ANY,
<l_field> TYPE ANY.
PARAMETERS: p_max(2) TYPE n DEFAULT '20' OBLIGATORY.
is_fcat-fieldname = 'COL01'.
is_fcat-ref_fieldname = 'MATNR'.
is_fcat-ref_tabname = 'MARA'.
APPEND is_fcat TO it_fcat.
is_fcat-fieldname = 'COL02'.
is_fcat-ref_fieldname = 'MAKTX'.
is_fcat-ref_tabname = 'MAKT'.
APPEND is_fcat TO it_fcat.
LOOP AT it_fcat INTO is_fcat.
is_fieldcat-fieldname = is_fcat-fieldname.
is_fieldcat-ref_field = is_fcat-ref_fieldname.
is_fieldcat-ref_table = is_fcat-ref_tabname.
APPEND is_fieldcat TO it_fieldcat.
CONCATENATE is_fieldcat-ref_table is_fieldcat-ref_field
INTO vg_campos SEPARATED BY '~'.
APPEND vg_campos TO i_campos.
ENDLOOP.
*... Create the dynamic internal table
CALL METHOD cl_alv_table_create=>create_dynamic_table
EXPORTING
it_fieldcatalog = it_fieldcat
IMPORTING
ep_table = new_table.
*... Create a new line
ASSIGN new_table->* TO <l_table>.
CREATE DATA new_line LIKE LINE OF <l_table>.
ASSIGN new_line->* TO <l_line>.
SELECT (i_campos) FROM mara INNER JOIN makt
ON mara~matnr = makt~matnr
UP TO p_max ROWS
INTO TABLE <l_table>.
LOOP AT <l_table> INTO <l_line>.
LOOP AT it_fcat INTO is_fcat.
ASSIGN COMPONENT is_fcat-fieldname
OF STRUCTURE <l_line> TO <l_field>.
IF sy-tabix = 1.
WRITE: /2 <l_field>.
ELSE.
WRITE: <l_field>.
ENDIF.
ENDLOOP.
ENDLOOP.
Regards,
Karuna.
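For comparison, the runtime type creation that cl_alv_table_create=>create_dynamic_table performs in the ABAP above can be mimicked in other languages by building a record type from a field catalog at runtime. A loose Python analogue (the field-catalog shape, sample values, and the `create_dynamic_table` helper name are made up for the demo):

```python
from collections import namedtuple

def create_dynamic_table(field_catalog):
    """Build a row type at runtime from a field catalog, loosely like
    cl_alv_table_create=>create_dynamic_table, and return it together
    with an empty list acting as the 'internal table'."""
    Row = namedtuple("Row", [f["fieldname"] for f in field_catalog])
    return Row, []

# A field catalog analogous to the it_fieldcat built in the ABAP report
fcat = [{"fieldname": "COL01", "ref": "MARA~MATNR"},
        {"fieldname": "COL02", "ref": "MAKT~MAKTX"}]

Row, itab = create_dynamic_table(fcat)
itab.append(Row(COL01="000000000000000023", COL02="Ball bearing"))
print(itab[0].COL02)  # Ball bearing
```

The key idea is the same in both languages: the row structure is not known at compile time, so it is assembled from metadata and then used through generic access (field symbols in ABAP, attribute access here).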