Regarding SKA1 and SKB1 tables
Hi,
I have a requirement to extract data from the SKA1 and SKB1 tables. Is there a standard DataSource for this, or should I do a generic extraction based on a view? Are the tables I mentioned correct?
This is the requirement I have:
The data in my Excel sheet has the budget for the year, the budget for the current month and the previous month, and, in parallel, the actuals for the current month and the previous month.
Hi,
These tables are master data for G/L accounts; there are no figures stored in these tables.
Regards,
Eli
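If you go the generic-extraction route, a database view over SKA1 and SKB1 could be the basis. As a rough sanity check of the join in ABAP (a hedged sketch — field names are per the standard SKA1/SKB1 definitions, and the type and table names are made up):

```abap
* Hedged sketch: join chart-of-accounts data (SKA1) with
* company-code data (SKB1) on the G/L account number.
* ty_acct / lt_acct are invented names; verify on your release.
TYPES: BEGIN OF ty_acct,
         ktopl TYPE ska1-ktopl,  " chart of accounts
         saknr TYPE ska1-saknr,  " G/L account number
         bukrs TYPE skb1-bukrs,  " company code
       END OF ty_acct.
DATA lt_acct TYPE STANDARD TABLE OF ty_acct.

SELECT a~ktopl a~saknr b~bukrs
  FROM ska1 AS a INNER JOIN skb1 AS b
    ON a~saknr = b~saknr
  INTO TABLE lt_acct
  UP TO 100 ROWS.
```

Note that, as Eli says, these tables hold master data only; the budget/actual figures you describe would come from a totals table (e.g. GLT0) or a standard FI DataSource, not from SKA1/SKB1.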
Similar Messages
-
Question regarding classic and advanced tables
Hi,
I have a classic and an advanced table, populated from a VO that returns 100 records. When I render the page I see the last record displayed. How can I show the first record by default on both tables?

By default the first record will be displayed; if you are seeing the last one instead, you probably did a vo.last() in the code.
Thanks
Tapash -
Regarding index and ANLA table
Hi,
I have the code below, with select-option inputs
for the fields BUKRS, ANLN1 and ANLN2.
SELECT bukrs anln1 anln2 anlkl txt50 ktogr invnrz sernr deakt
FROM anla
INTO TABLE o_anla
WHERE bukrs IN wa_ex_bukrs.
if sy-subrc = 0.
DELETE o_anla WHERE anln2 NOT IN wa_ex_anln2.
DELETE o_anla WHERE anln1 NOT IN wa_ex_anln1.
endif.
Is the code change below faster than the code above?
SELECT bukrs anln1 anln2 anlkl txt50 ktogr invnrz sernr deakt
FROM anla
INTO TABLE o_anla
WHERE bukrs IN wa_ex_bukrs
and anln1 in wa_ex_anln1
and anln2 in wa_ex_anln2.
I believe BUKRS, ANLN1 and ANLN2 are primary key fields and could improve performance.
However, there is no standard index available for these 3 fields. Do you think creating a secondary index
on these 3 fields can increase performance?
Please confirm.
Regards

Hi,
If BUKRS, ANLN1 and ANLN2 form the primary key, then you are hitting the table via the primary index itself (which is always the better option for good performance). Your second query will always do better. There is no need to create a secondary index in this case.
Regards,
Ravi -
Query regarding passivation and PS_TXN tables
Hi All ,
I am working on a read only Dashboard UI where the DB user has only read privileges.
The jbo.server.internal_connection uses the same DB connection to create the PS_TXN & PS_TXN_SEQ tables, which fails for obvious reasons, and I get the error:
"Could not create persistence table PS_TXN_SEQ".
I have disabled passivation at all the VO levels and also set jbo.isSupportsPassivation to false at the AMLocal level, but I am still getting this error.
I have also increased the initial AM pool size in bc4j.xcfg and the connection pool at the WebLogic server level to avoid snapshots being written to this table.
Is there any way I can prevent any interaction with this table, as the client is not interested in any kind of passivation for the time being?
Thanks

Thanks for your reply Chris.
Chris Muir wrote:
You might be taking the wrong approach to solving this. Rather than disabling the AM pooling (which btw is not supported by Oracle) to the database, instead you can get ADF to passivate to file or memory of the app server.

I am confused as to what exactly jbo.ampool.issupportspassivation = false does then? I read on one of the blogs that it is a viable use case for programmatic VOs?
Also, regarding passivating to the file system: as per http://download.oracle.com/docs/cd/E12839_01/web.1111/b31974/bcstatemgmt.htm#ADFFD1307 it is not really a recommended approach, hence I was not going for that.
Can you please throw some more light ?
Thanks -
Three questions regarding DB_KEEP_CACHE_SIZE and caching tables.
Folks,
My Oracle 10g database, which I inherited, has the init.ora parameter DB_KEEP_CACHE_SIZE configured to 4GB.
There are also a bunch of tables that were created with CACHE turned on.
By querying the dba_tables view with CACHE='Y', I can see the names of these tables.
Over time, some of these tables have grown in size (number of rows), and some of them no longer need to be cached.
So here is my first question:
1) Is there a query I can run to find out what tables are currently in the DB_KEEP_CACHE_SIZE?
2) Also, how can I find out if my DB_KEEP_CACHE_SIZE is adequately sized or needs to be increased, as some of these
tables have grown in size?
Third question:
I know for a fact that there are 2 tables that no longer need to be cached.
So how do I make sure they do not occupy space in the DB_KEEP_CACHE_POOL?
I tried the statement: alter table <table_name> nocache;
Now the CACHE column value for these tables in dba_tables is 'N', but if I query the dba_segments view, the BUFFER_POOL column for them still has the value 'KEEP'.
After altering these tables to NOCACHE, I did bounce my database.
Again, how do I make sure these tables, which no longer need to be cached, do not occupy space in the DB_KEEP_CACHE_SIZE?
Would very much appreciate your help.
Regards
Ashish

Hello,
1) Is there a query I can run to find out what tables are currently in the DB_KEEP_CACHE_SIZE?

You may try this query:
select owner, segment_name, segment_type, buffer_pool
from dba_segments
where buffer_pool = 'KEEP'
order by owner, segment_name;
2) Also, how can I find out if my DB_KEEP_CACHE_SIZE is adequately sized or needs to be increased, as some of these tables have grown in size?

You may try to get the total size of the segments using the KEEP buffer:
select sum(bytes)/(1024*1024) "Mo"
from dba_segments
where buffer_pool = 'KEEP';

To be sure that all the blocks of these segments (table / index) won't often be aged out of the KEEP buffer, the total size given by the above query should be less than the size of your KEEP buffer.
I know for a fact that there are 2 tables that do not need to be cached any longer. So how do I make sure they do not occupy space in the DB_KEEP_CACHE_POOL?

You just have to execute the following statement:
ALTER TABLE <owner>.<table> STORAGE(BUFFER_POOL DEFAULT);

Hope this helps.
Best regards,
Jean-Valentin -
Regarding ODS and Physical Tables
Hi,
can you tell me the difference between an ODS and a physical table created using transaction SE11?
I am writing a program to update this table not via update rules; it just updates the table using a MODIFY statement.
Which one should I use, a physical table or an ODS?
Thanks,
Vijaya

Hi Vijaya,
You want to insert the records into the table/ODS using some ABAP (not through update rules).
In such a case you can do it either way, but going for an ODS gives an added advantage: you will be able to report on the data in the ODS.
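For the direct-update variant, the MODIFY statement mentioned above works roughly like this (a hedged sketch — ZMYTABLE and its fields are placeholder names):

```abap
* Hedged sketch: ZMYTABLE and its fields are invented names.
* MODIFY inserts the row, or overwrites it if a row with the
* same primary key already exists.
DATA wa_ztab TYPE zmytable.

wa_ztab-key_field   = '001'.
wa_ztab-value_field = 'X'.
MODIFY zmytable FROM wa_ztab.
IF sy-subrc <> 0.
  MESSAGE 'Update failed' TYPE 'E'.
ENDIF.
```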
Hope it helps..
Regards,
Amol -
Regarding BP and Premise tables
Hi,
What are the tables that will be updated when we create a Business Partner and a Premise?
Please Reply
Regards,

BP details should be in the BUT000 table.
The address number should be in BUT020.
Address details will be in ADRC.
Telephone details will be in ADR6.
Search for BUT* in SE11 and you will get the exact descriptions as well.
Premise: EVBS, EVBST...
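To see how those tables hang together for a single partner, a hedged sketch (the partner number is a placeholder; verify field names on your release):

```abap
* Hedged sketch: read BP master, its address number, then the
* address details. '0000000001' is a placeholder BP number.
DATA: ls_but000 TYPE but000,
      ls_but020 TYPE but020,
      ls_adrc   TYPE adrc.

SELECT SINGLE * FROM but000 INTO ls_but000
  WHERE partner = '0000000001'.
IF sy-subrc = 0.
  SELECT SINGLE * FROM but020 INTO ls_but020
    WHERE partner = ls_but000-partner.
  IF sy-subrc = 0.
    SELECT SINGLE * FROM adrc INTO ls_adrc
      WHERE addrnumber = ls_but020-addrnumber.
  ENDIF.
ENDIF.
```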
Please Allot points
Regards,
Shiv. -
Improve Performance of Dimension and Fact table
Hi All,
Can anyone explain the steps to improve the performance of dimension and fact tables?
Thanks in advance...
redd

Hi!
There is much to be said about performance in general, but I will try to answer your specific question regarding fact and dimension tables.
First of all, try to compress as many requests as possible in the fact table, and do that regularly.
Partition your compressed fact table physically based on for example 0CALMONTH. In the infocube maintenance, in the Extras menu, choose partitioning.
Partition your cube logically into several smaller cubes based on for example 0CALYEAR. Combine the cubes with a multiprovider.
Use constants on infocube level (Extras->Structure Specific Infoobject properties) and/or restrictions on specific cubes in your multiprovider queries if needed.
Create aggregates of subsets of your characteristics based on your query design. Use the debug option in RSRT to investigate which objects you need to include.
To investigate the size of the dimension tables, first use the test in transaction RSRV (Database Information about InfoProvider Tables). It will tell you the relative sizes of your dimensions in comparison to your fact table. Then go to transaction DB02 and conduct a detailed analysis on the large dimension tables. You can choose "table columns" in the detailed analysis screen to see the number of distinct values in each column (characteristic). You also need to understand the "business logic" behind these objects: the ones that have low cardinality, that is, relate closely to each other, should be located together. With this information at hand you can understand which objects contribute the most to the size of the dimension, and separate the dimension accordingly.
Use line item dimension where applicable, but use the "high cardinality" option with extreme care.
Generate database statistics regularly using process chains or (if you use Oracle) schedule BRCONNECT runs using transaction DB13.
Good luck!
Kind Regards
Andreas -
Regarding the inbuilt log and audit tables for APEX 4.1
Hi,
When we access the Administrator login we can view various logs, like the SQL commands that have been recently fired, the user list for a workspace, and access to each application. Where are these data stored and fetched from? Also, can we get the inbuilt audit and log tables for APEX 4.1?
Thanks and Regards
Please update your forum profile with a real handle instead of "935203".
When we access the Administrator login we can view various logs, like the SQL commands that have been recently fired, the user list for a workspace, and access to each application. Where are these data stored and fetched from? Also, can we get the inbuilt audit and log tables for APEX 4.1?

This information is available through APEX views. See:
- APEX views: Home > Application Builder > Application > Utilities > Application Express Views
- Monitoring Activity Within a Workspace
- Creating Custom Activity Reports Using APEX_ACTIVITY_LOG
Note that the underlying logs are purged on a regular basis, so for the long term you need to copy the data to your own tables, as Martin handily suggests here. -
Download the KTOPL field data and GLT0 table data into one Internal table
Hi,
I have downloaded the GLT0 table field data to a PC file, but I also need to download KTOPL (Chart of Accounts) data, and there is no KTOPL field in the GLT0 table.
The SKA1 table does have a KTOPL field. So the issue is that the GLT0 data and the KTOPL field data need to be downloaded into one internal table.
Could anybody please help solve this problem? It needs to be solved immediately.
Below is the code.
REPORT ZFXXEABL_1 NO STANDARD PAGE HEADING
LINE-SIZE 200.
* Tables Declaration
TABLES : GLT0.
* Data Declaration
DATA : FP(8) TYPE C,
YEAR LIKE GLT0-RYEAR,
PERIOD(3) TYPE C,
DBALANCE LIKE VBAP-NETWR VALUE 0 ,
CBALANCE LIKE VBAP-NETWR VALUE 0.
*Internal table for final data
DATA : BEGIN OF REC1 OCCURS 0,
BAL LIKE GLT0-TSLVT value 0,
COAREA LIKE GLT0-RBUSA,
CA(4) TYPE C,
KTOPL LIKE ska1-ktopl,
CCODE LIKE GLT0-BUKRS,
CREDIT LIKE VBAP-NETWR,
CURRENCY LIKE GLT0-RTCUR,
CURTYPE(2) TYPE N,
DEBIT LIKE VBAP-NETWR,
YEAR(8) TYPE C,
FY(2) TYPE C,
ACCOUNT LIKE GLT0-RACCT,
VER LIKE GLT0-RVERS,
VTYPE(2) TYPE N,
CLNT LIKE SY-MANDT,
S_SYS(3) TYPE C,
INDICATOR LIKE GLT0-DRCRK,
END OF REC1.
DATA : C(2) TYPE N,
D(2) TYPE N.
DATA REC1_H LIKE REC1.
* Variable declarations
DATA :
W_FILES(4) TYPE N,
W_DEBIT LIKE GLT0-TSLVT,
W_CREDIT LIKE GLT0-TSLVT,
W_PCFILE LIKE RLGRAP-FILENAME ,
W_UNIXFILE LIKE RLGRAP-FILENAME,
W_PCFILE1 LIKE RLGRAP-FILENAME,
W_UNIXFIL1 LIKE RLGRAP-FILENAME,
W_EXT(3) TYPE C,
W_UEXT(3) TYPE C,
W_PATH LIKE RLGRAP-FILENAME,
W_UPATH LIKE RLGRAP-FILENAME,
W_FIRST(1) TYPE C VALUE 'Y',
W_CFIRST(1) TYPE C VALUE 'Y',
W_PCFIL LIKE RLGRAP-FILENAME.
DATA: "REC LIKE GLT0 OCCURS 0 WITH HEADER LINE,
T_TEMP LIKE GLT0 OCCURS 0 WITH HEADER LINE.
DATA: BEGIN OF REC3 OCCURS 0.
INCLUDE STRUCTURE GLT0.
DATA: KTOPL LIKE SKA1-KTOPL,
END OF REC3.
DATA: BEGIN OF T_KTOPL OCCURS 0,
KTOPL LIKE SKA1-KTOPL,
SAKNR LIKE SKA1-SAKNR,
END OF T_KTOPL.
* Download data
DATA: BEGIN OF I_REC2 OCCURS 0,
BAL(17), " like GLT0-TSLVT value 0,
COAREA(4), " like glt0-rbusa,
CA(4), " chart of accounts
CCODE(4), " like glt0-bukrs,
CREDIT(17), " like vbap-netwr,
CURRENCY(5), " like glt0-rtcur,
CURTYPE(2), " type n,
DEBIT(17), " like vbap-netwr,
YEAR(8), " type c,
FY(2), " type c, fiscal yr variant
ACCOUNT(10), " like glt0-racct,
VER(3), " like glt0-rvers,
VTYPE(3), " type n,
CLNT(3), "like sy-mandt,
S_SYS(3), "like sy-sysid,
INDICATOR(1), " like glt0-drcrk,
END OF I_REC2.
* Selection screen *
SELECTION-SCREEN BEGIN OF BLOCK BL1 WITH FRAME TITLE TEXT-BL1.
SELECT-OPTIONS : COMPCODE FOR GLT0-BUKRS,
GLACC FOR GLT0-RACCT,
FISYEAR FOR GLT0-RYEAR NO INTERVALS NO-EXTENSION, "- BG6661-070212
FISCPER FOR GLT0-RPMAX,
busarea for glt0-rbusa,
CURRENCY FOR GLT0-RTCUR.
SELECTION-SCREEN END OF BLOCK BL1.
SELECTION-SCREEN BEGIN OF BLOCK BL2 WITH FRAME TITLE TEXT-BL2.
PARAMETERS:
P_UNIX AS CHECKBOX, "Check box for Unix option
P_UNFIL LIKE RLGRAP-FILENAME "Unix download file name
DEFAULT '/var/opt/arch/extract/GLT0.ASC', "- BG6661-070212
P_PCFILE AS CHECKBOX, "Check box for local PC download
P_PCFIL LIKE RLGRAP-FILENAME "PC download file name
*DEFAULT 'C:\GLT0.ASC'. "- BG6661-070212
DEFAULT 'C:\glt0_gl_balance_all.asc'. "+ BG6661-070212
SELECTION-SCREEN END OF BLOCK BL2.
*eject
* Initialization *
INITIALIZATION.
* Try to default download filename
p_pcfil = c_pcfile.
p_unfil = c_unixfile.
if sy-sysid eq c_n01.
p_unfil = c_unixfile.
endif.
if sy-sysid eq c_g21.
p_unfil = c_g21_unixfile.
endif.
if sy-sysid eq c_g9d.
p_unfil = c_g9d_unixfile.
endif.
* Default for download filename
*{ Begin of BG6661-070212
CONCATENATE C_UNIXFILE
SY-SYSID C_FSLASH C_CHRON C_FILENAME INTO P_UNFIL.
*} End of BG6661-070212
AT SELECTION-SCREEN OUTPUT.
loop at screen.
if screen-name = 'P_PCFIL'. "PC FILE
screen-input = '0'.
modify screen.
endif.
if screen-name = 'P_UNFIL'. "UN FILE
screen-input = '0'.
modify screen.
endif.
endloop.
if w_first = 'Y'.
perform path_file.
w_first = 'N'.
endif.
if w_cfirst = 'Y'.
perform cpath_file.
w_cfirst = 'N'.
endif.
* Start-of-Selection *
START-OF-SELECTION.
*COLLECT DATA
PERFORM COLLECT_DATA.
*BUILD FILENAMES
PERFORM BUILD_FILES.
*LOCAL
IF P_PCFILE = C_YES.
PERFORM LOCAL_DOWNLOAD.
ENDIF.
*UNIX
IF P_UNIX = C_YES.
PERFORM UNIX_DOWNLOAD.
ENDIF.
IF P_PCFILE IS INITIAL AND P_UNIX IS INITIAL.
MESSAGE I000(ZL) WITH 'Download flags are both unchecked'.
ENDIF.
END-OF-SELECTION.
IF P_PCFILE = C_YES.
WRITE :/ 'PC File' , C_UNDER, P_PCFIL.
ENDIF.
*& Form DOWNLOAD
* Download *
FORM DOWNLOAD.
P_PCFIL = W_PATH.
DATA LIN TYPE I.
DESCRIBE TABLE I_REC2 LINES LIN.
WRITE:/ 'No of Records downloaded = ',LIN.
CALL FUNCTION 'WS_DOWNLOAD'
EXPORTING
FILENAME = P_PCFIL
FILETYPE = C_ASC "c_dat "dat
TABLES
DATA_TAB = I_REC2 " t_str
fieldnames = t_strhd
EXCEPTIONS
FILE_OPEN_ERROR = 1
FILE_WRITE_ERROR = 2
INVALID_FILESIZE = 3
INVALID_TABLE_WIDTH = 4
INVALID_TYPE = 5
NO_BATCH = 6
UNKNOWN_ERROR = 7
OTHERS = 8.
IF SY-SUBRC EQ 0.
ENDIF.
ENDFORM.
*& Form WRITE_TO_SERVER
* text *
*  --> p1 text
*  <-- p2 text
FORM WRITE_TO_SERVER.
DATA : L_MSG(100) TYPE C,
L_LINE(5000) TYPE C.
P_UNFIL = W_UPATH.
DATA LIN TYPE I.
DESCRIBE TABLE I_REC2 LINES LIN.
WRITE:/ 'No of Records downloaded = ',LIN.
OPEN DATASET P_UNFIL FOR OUTPUT IN TEXT MODE. " message l_msg.
IF SY-SUBRC <> 0.
WRITE: / L_MSG.
ENDIF.
perform header_text1.
LOOP AT I_REC2.
TRANSFER I_REC2 TO P_UNFIL.
ENDLOOP.
CLOSE DATASET P_UNFIL.
WRITE : / C_TEXT , W_UPATH.
SPLIT W_UNIXFILE AT C_DOT INTO W_UNIXFIL1 W_UEXT.
CLEAR W_UPATH.
IF NOT W_UEXT IS INITIAL.
CONCATENATE W_UNIXFIL1 C_DOT W_UEXT INTO W_UPATH.
ELSE.
W_UEXT = C_ASC. " c_csv.
CONCATENATE W_UNIXFIL1 C_DOT W_UEXT INTO W_UPATH.
ENDIF.
ENDFORM. " WRITE_TO_SERVER
*& Form BUILD_FILES
FORM BUILD_FILES.
IF P_PCFILE = C_YES.
W_PCFILE = P_PCFIL.
***Split path at dot**
SPLIT W_PCFILE AT C_DOT INTO W_PCFILE1 W_EXT.
IF NOT W_EXT IS INITIAL.
CONCATENATE W_PCFILE1 C_DOT W_EXT INTO W_PATH.
ELSE.
W_PATH = W_PCFILE1.
ENDIF.
ENDIF.
IF P_UNIX = C_YES.
W_UNIXFILE = P_UNFIL.
SPLIT W_UNIXFILE AT C_DOT INTO W_UNIXFIL1 W_UEXT.
IF NOT W_UEXT IS INITIAL.
CONCATENATE W_UNIXFIL1 C_DOT W_UEXT INTO W_UPATH.
ELSE.
W_UPATH = W_UNIXFIL1.
ENDIF.
ENDIF.
ENDFORM.
FORM CPATH_FILE.
CLEAR P_PCFIL.
CONCATENATE C_PCFILE
C_COMFILE SY-SYSID C_UNDER SY-DATUM SY-UZEIT
C_DOT C_ASC INTO P_PCFIL.
ENDFORM. " CPATH_FILE
FORM PATH_FILE.
CLEAR P_UNFIL.
if sy-sysid eq c_n01.
CONCATENATE C_UNIXFILE
C_COMFILE SY-SYSID C_UNDER SY-DATUM SY-UZEIT
C_DOT C_ASC INTO P_UNFIL.
endif.
if sy-sysid eq c_g21.
concatenate c_g21_unixfile
c_comfile sy-sysid c_under sy-datum sy-uzeit
c_dot c_asc into p_unfil.
endif.
if sy-sysid eq c_g9d.
concatenate c_g9d_unixfile
c_comfile sy-sysid c_under sy-datum sy-uzeit
c_dot c_asc into p_unfil.
endif.
ENDFORM. " PATH_FILE
* Local_Download *
* Local *
FORM LOCAL_DOWNLOAD.
perform header_text.
LOOP AT REC1.
REC1-CLNT = SY-MANDT.
REC1-S_SYS = SY-SYSID.
MOVE: REC1-BAL TO I_REC2-BAL,
REC1-COAREA TO I_REC2-COAREA,
REC1-CA TO I_REC2-CA,
REC1-KTOPL TO I_REC2-CA,
REC1-CCODE TO I_REC2-CCODE,
REC1-CREDIT TO I_REC2-CREDIT,
REC1-CURRENCY TO I_REC2-CURRENCY,
REC1-CURTYPE TO I_REC2-CURTYPE,
REC1-DEBIT TO I_REC2-DEBIT,
REC1-YEAR TO I_REC2-YEAR,
REC1-FY TO I_REC2-FY,
REC1-ACCOUNT TO I_REC2-ACCOUNT,
REC1-VER TO I_REC2-VER,
REC1-VTYPE TO I_REC2-VTYPE,
REC1-CLNT TO I_REC2-CLNT,
REC1-S_SYS TO I_REC2-S_SYS,
REC1-INDICATOR TO I_REC2-INDICATOR.
APPEND I_REC2.
CLEAR I_REC2.
ENDLOOP.
IF NOT I_REC2[] IS INITIAL.
PERFORM DOWNLOAD .
CLEAR I_REC2.
REFRESH I_REC2.
ELSE.
WRITE : / 'No records exist due to unavailability of data'.
ENDIF.
ENDFORM. " LOCAL_DOWNLOAD
*& Form UNIX_DOWNLOAD
FORM UNIX_DOWNLOAD.
LOOP AT REC1.
REC1-CLNT = SY-MANDT.
REC1-S_SYS = SY-SYSID.
MOVE: REC1-BAL TO I_REC2-BAL,
REC1-COAREA TO I_REC2-COAREA,
REC1-CA TO I_REC2-CA,
REC1-KTOPL TO I_REC2-CA,
REC1-CCODE TO I_REC2-CCODE,
REC1-CREDIT TO I_REC2-CREDIT,
REC1-CURRENCY TO I_REC2-CURRENCY,
REC1-CURTYPE TO I_REC2-CURTYPE,
REC1-DEBIT TO I_REC2-DEBIT,
REC1-YEAR TO I_REC2-YEAR,
REC1-FY TO I_REC2-FY,
REC1-ACCOUNT TO I_REC2-ACCOUNT,
REC1-VER TO I_REC2-VER,
REC1-VTYPE TO I_REC2-VTYPE,
SY-MANDT TO I_REC2-CLNT,
SY-SYSID TO I_REC2-S_SYS,
REC1-INDICATOR TO I_REC2-INDICATOR.
APPEND I_REC2.
CLEAR I_REC2.
ENDLOOP.
IF NOT I_REC2[] IS INITIAL.
PERFORM WRITE_TO_SERVER.
CLEAR I_REC2.
REFRESH I_REC2.
ELSE.
WRITE : / 'No records exist due to unavailability of data'.
ENDIF.
ENDFORM. " UNIX_DOWNLOAD
*& Form HEADER_TEXT
* text *
*  --> p1 text
*  <-- p2 text
FORM HEADER_TEXT.
concatenate c_bal c_ba c_ca c_cc c_credit c_currency c_curtype
c_debit c_fisyear c_fisvar c_acct c_ver c_vtype c_indicator
into t_strhd
separated by c_comma.
append t_strhd.
ENDFORM. " HEADER_TEXT
*& Form HEADER_TEXT1
* text *
FORM HEADER_TEXT1.
concatenate c_bal c_ba c_ca c_cc c_credit c_currency c_curtype
c_debit c_fisyear c_fisvar c_acct c_ver c_vtype c_indicator
into t_strhd1
separated by c_comma.
append t_strhd1.
transfer t_strhd1 to p_unfil.
ENDFORM. " HEADER_TEXT1
*& Form COLLECT_DATA
* Collect Data *
FORM COLLECT_DATA.
SELECT * FROM GLT0 INTO TABLE REC3
WHERE BUKRS IN COMPCODE
AND RYEAR IN FISYEAR
AND RPMAX IN FISCPER
AND RACCT IN GLACC
AND RTCUR IN CURRENCY.
SELECT KTOPL SAKNR FROM SKA1
INTO TABLE T_KTOPL
FOR ALL ENTRIES IN REC3
WHERE SAKNR = REC3-RACCT.
LOOP AT REC3 .
select *
from glt0
into table t_temp
where rldnr = rec3-rldnr
and rrcty = rec3-rrcty
and rvers = rec3-rvers
and bukrs = rec3-bukrs
and ryear = rec3-ryear
and racct = rec3-racct
and rbusa = rec3-rbusa
and rtcur <> 'ZAR'
and rpmax = rec3-rpmax.
if sy-subrc = 0.
rec1-bal = '0.00'.
else.
rec1-bal = rec3-hslvt.
endif.
*READ TABLE T_KTOPL WITH KEY SAKNR = REC-RACCT BINARY SEARCH.
MOVE T_KTOPL-KTOPL TO REC3-KTOPL.
CLEAR: CBALANCE, DBALANCE.
REC1-BAL = REC3-HSLVT.
IF REC3-DRCRK = 'S'.
IF REC3-HSLVT NE C_ZERO.
YEAR = REC3-RYEAR.
PERIOD = '000'.
CONCATENATE PERIOD C_DOT YEAR INTO FP.
REC1-INDICATOR = REC3-DRCRK.
REC1-DEBIT = C_ZERO.
REC1-CREDIT = C_ZERO.
REC1-CCODE = REC3-BUKRS.
REC1-YEAR = FP.
REC1-CURRENCY = REC3-RTCUR.
REC1-ACCOUNT = REC3-RACCT.
rec1-bal = rec3-hslvt.
dbalance = rec1-bal.
REC1-CURTYPE = C_CTYPE.
REC1-FY = C_FY.
REC1-COAREA = REC3-RBUSA.
REC1-VER = REC3-RVERS.
REC1-VTYPE = C_CTYPE.
REC1-CA = C_CHART.
APPEND REC1.
C = 0.
PERFORM D.
ENDIF.
IF REC3-HSL01 NE C_ZERO.
YEAR = REC3-RYEAR.
PERIOD = '001'.
CONCATENATE PERIOD C_DOT YEAR INTO FP.
REC1-INDICATOR = REC3-DRCRK.
REC1-DEBIT = REC3-HSL01 .
REC1-CCODE = REC3-BUKRS.
REC1-YEAR = FP.
REC1-CURRENCY = REC3-RTCUR.
REC1-ACCOUNT = REC3-RACCT.
rec1-bal = REC3-hsl01 + dbalance.
dbalance = rec1-bal.
REC1-CURTYPE = C_CTYPE.
REC1-FY = C_FY.
REC1-COAREA = REC3-RBUSA.
REC1-VER = REC3-RVERS.
REC1-VTYPE = C_CTYPE.
REC1-CA = C_CHART.
REC1-KTOPL = REC3-KTOPL.
APPEND REC1.
C = 1.
PERFORM D.
ENDIF.
IF REC3-HSL02 NE C_ZERO.
REC1-DEBIT = REC3-HSL02.
YEAR = REC3-RYEAR.
PERIOD = '002'.
CONCATENATE PERIOD C_DOT YEAR INTO FP.
REC1-INDICATOR = REC3-DRCRK.
REC1-DEBIT = REC3-HSL02.
REC1-CCODE = REC3-BUKRS.
REC1-YEAR = FP.
REC1-CURRENCY = REC3-RTCUR.
REC1-ACCOUNT = REC3-RACCT.
rec1-bal = REC3-hsl02 + dbalance.
dbalance = rec1-bal.
REC1-CURTYPE = C_CTYPE.
REC1-FY = C_FY.
REC1-COAREA = REC3-RBUSA.
REC1-VER = REC3-RVERS.
REC1-VTYPE = C_CTYPE.
REC1-CA = C_CHART. "-BF7957-070503
REC1-KTOPL = REC3-KTOPL. "+BF7957-070503
APPEND REC1.
C = 2.
PERFORM D.
ENDIF.
IF REC3-HSL03 NE C_ZERO.
YEAR = REC3-RYEAR.
PERIOD = '003'.
CONCATENATE PERIOD C_DOT YEAR INTO FP.
REC1-INDICATOR = REC3-DRCRK.
REC1-DEBIT = REC3-HSL03.
REC1-CCODE = REC3-BUKRS.
REC1-YEAR = FP.
REC1-CURRENCY = REC3-RTCUR.
REC1-ACCOUNT = REC3-RACCT.
rec1-bal = REC3-hsl03 + dbalance .
dbalance = rec1-bal.
REC1-CURTYPE = C_CTYPE.
REC1-FY = C_FY.
REC1-COAREA = REC3-RBUSA.
REC1-VER = REC3-RVERS.
REC1-VTYPE = C_CTYPE.
REC1-CA = C_CHART. "-BF7957-070503
REC1-KTOPL = REC3-KTOPL. "+BF7957-070503
APPEND REC1.
C = 3.
PERFORM D.
ENDIF.
IF REC3-HSL04 NE C_ZERO.
REC1-DEBIT = REC3-HSL04.
YEAR = REC3-RYEAR.
PERIOD = '004'.
CONCATENATE PERIOD C_DOT YEAR INTO FP.
REC1-INDICATOR = REC3-DRCRK.
REC1-DEBIT = REC3-HSL04.
REC1-CCODE = REC3-BUKRS.
REC1-YEAR = FP.
REC1-CURRENCY = REC3-RTCUR.
REC1-ACCOUNT = REC3-RACCT.
rec1-bal = REC3-hsl04 + dbalance .
REC1-CURTYPE = C_CTYPE.
REC1-FY = C_FY.
REC1-COAREA = REC3-RBUSA.
REC1-VER = REC3-RVERS.
REC1-VTYPE = C_CTYPE.
REC1-CA = C_CHART. "-BF7957-070503
REC1-KTOPL = REC3-KTOPL. "+BF7957-070503
APPEND REC1.
dbalance = rec1-bal.
C = 4.
PERFORM D.
ENDIF.
Thanks and Regards,
Ramu

Use logical database SDF, nodes SKA1 and SKC1C.
A. -
SKB1 table is not populated - Bulgaria country version
Hi,
Our team is working on installing the country version for Bulgaria.
We have ECC 6.0 and SAPK-11023INCCEE ECC Core Country versions for EEM Country (Romania country localization).
We followed the instructions of SAP note 856949 "Enhancements for mySAP R/3 46C and
higher for Bulgaria" and we face the following problem:
Table SKB1 is not populated for company code BG01, but we have data in the SKA1 and SKAT tables (chart of accounts CABG).
Although we imported CT_BG_160108.zip and CT_SKB1_table_content.zip, the SKB1 table still does not contain data for company code BG01.
Please help us to solve this problem.
Moderator: I'd suggest opening an OSS message on this problem, if you are sure that you maintain the data for the company code.

Hello,
Here are a few hints:
- In the Operator, check the steps of your interface. Are rows inserted into the I$ tables?
- If you use flow control, do your E$ tables contain anything?
- Check the SQL code generated in the integration step, extract the SELECT from it, and execute it using SQL*Plus, SQL Developer or whatever you want. Does it return anything? Maybe there is a wrong filter...
If you don't find the error, tell us which IKM you use and give us the generated code of the step which returns no data.
Hope it helps -
In Answers am seeing "Folder is Empty" for Logical Fact and Dimension Table
Hi All,
I am working in OBIEE Answers. All of a sudden, when I clicked on a logical fact table it showed "folder is empty". I restarted all the services and tried again; it is still the same for logical fact and dimension tables, but I am able to see all my reports in Shared Folders. I restarted the machine too, but no change. Please help me resolve this issue.
Thanks in Advance.
Regards,
Rajkumar.

First of all, follow the forum etiquette:
http://forums.oracle.com/forums/ann.jspa?annID=939
Mark as answer the post that helped you.
And for your question, check the log for a possibly corrupt catalog:
OracleBIData_Home\web\log\sawlog0.log -
No data in cube,but there is data in F and E table, also in dimension table
Hi, gurus,
I successfully loaded more than 100,000 records of data into an InfoCube, but when I check the content of the InfoCube (without any restriction), unexpectedly no data is displayed. I also checked the F and E tables: still no data, but there is data in the dimension tables. What happened?
Please help. Thanks a lot.

Hi Kulun Rao,
Try the following options.
Check the selection screen.
Go to the InfoCube through transaction LISTCUBE.
Check the compression and roll-up statuses.
Check the reporting status in the Request tab.
Refresh the cube (you already did this).
Check the index status in the Request tab.
These things might help you.
Regards,
HARI GUPTA -
Relation b/w PRPS and RPSCO tables
Hi PS Experts,
What is the relation between the PRPS and RPSCO tables?
PRPS : WBS (Work Breakdown Structure) Element Master Data
RPSCO : Project info database: Costs, revenues, finances
Thanks in Advance,
Regards,
Sudharsan.

Hello Sasikanth, thank you for the information.
I need some more information. I am working in ABAP and now have a requirement in PS:
I want to calculate values like Planned Revenue, Planned Cost, Actual Revenue, Actual Cost, Actual Billing, Revenue Taken, Cost Taken, BNS and SNB,
based on the input parameters Company Code, Fiscal Year and Month.
Please let me know how to calculate the above values; it is very urgent for me.
Thanks in advance,
sudharsan. -
Performance issue in BI due to direct query on BKPF and BSEG tables
Hi,
We had a requirement that the FI document number field should be extracted into BI.
The following code was written; it has the correct logic but the performance is bad.
It fetched just 100 records in more than 4-5 hours.
The reason is that there is a direct query on the BSEG and BKPF tables (without a WHERE clause).
Is there any way to improve this code, like adding the GJAHR field to a WHERE clause? I don't want to change the logic.
Following is the code:
WHEN '0CO_OM_CCA_9'." Data Source
TYPES:BEGIN OF ty_bkpf,
belnr TYPE bkpf-belnr,
xblnr TYPE bkpf-xblnr,
bktxt TYPE bkpf-bktxt,
awkey TYPE bkpf-awkey,
bukrs TYPE bkpf-bukrs,
gjahr TYPE bkpf-gjahr,
AWTYP TYPE bkpf-AWTYP,
END OF ty_bkpf.
TYPES : BEGIN OF ty_bseg1,
lifnr TYPE bseg-lifnr,
belnr TYPE bseg-belnr,
bukrs TYPE bseg-bukrs,
gjahr TYPE bseg-gjahr,
END OF ty_bseg1.
DATA: it_bkpf TYPE STANDARD TABLE OF ty_bkpf,
wa_bkpf TYPE ty_bkpf,
it_bseg1 TYPE STANDARD TABLE OF ty_bseg1,
wa_bseg1 TYPE ty_bseg1,
l_s_icctrcsta1 TYPE icctrcsta1.
"Extract structure for Datasoure 0co_om_cca_9.
DATA: l_awkey TYPE bkpf-awkey.
DATA: l_gjahr1 TYPE gjahr.
DATA: len TYPE i,
l_cnt TYPE i.
l_cnt = 10.
tables : covp.
data : ref_no(20).
SELECT lifnr
belnr
bukrs
gjahr
FROM bseg
INTO TABLE it_bseg1.
DELETE ADJACENT DUPLICATES FROM it_bseg1 COMPARING belnr gjahr .
SELECT belnr
xblnr
bktxt
awkey
bukrs
gjahr
AWTYP
FROM bkpf
INTO TABLE it_bkpf.
IF sy-subrc EQ 0.
CLEAR: l_s_icctrcsta1,
wa_bkpf,
l_awkey,
wa_bseg1.
LOOP AT c_t_data INTO l_s_icctrcsta1.
MOVE l_s_icctrcsta1-fiscper(4) TO l_gjahr1.
select single AWORG AWTYP INTO CORRESPONDING FIELDS OF COVP FROM COVP
WHERE belnr = l_s_icctrcsta1-belnr.
if sy-subrc = 0.
if COVP-AWORG is initial.
concatenate l_s_icctrcsta1-refbn '%' into ref_no.
READ TABLE it_bkpf INTO wa_bkpf WITH KEY awkey(10) =
l_s_icctrcsta1-refbn
awtyp = COVP-AWTYP
gjahr = l_gjahr1.
IF sy-subrc EQ 0.
MOVE wa_bkpf-belnr TO l_s_icctrcsta1-zzbelnr.
MOVE wa_bkpf-xblnr TO l_s_icctrcsta1-zzxblnr.
MOVE wa_bkpf-bktxt TO l_s_icctrcsta1-zzbktxt.
MODIFY c_t_data FROM l_s_icctrcsta1.
READ TABLE it_bseg1 INTO wa_bseg1
WITH KEY
belnr = wa_bkpf-belnr
bukrs = wa_bkpf-bukrs
gjahr = wa_bkpf-gjahr.
IF sy-subrc EQ 0.
MOVE wa_bseg1-lifnr TO l_s_icctrcsta1-lifnr.
MODIFY c_t_data FROM l_s_icctrcsta1.
CLEAR: l_s_icctrcsta1,
wa_bseg1,
l_gjahr1.
ENDIF.
ENDIF.
ELSE. " IF AWORG IS NOT BLANK -
concatenate l_s_icctrcsta1-refbn COVP-AWORG into ref_no.
READ TABLE it_bkpf INTO wa_bkpf WITH KEY awkey(20) =
ref_no
awtyp = COVP-AWTYP
gjahr = l_gjahr1.
IF sy-subrc EQ 0.
MOVE wa_bkpf-belnr TO l_s_icctrcsta1-zzbelnr.
MOVE wa_bkpf-xblnr TO l_s_icctrcsta1-zzxblnr.
MOVE wa_bkpf-bktxt TO l_s_icctrcsta1-zzbktxt.
MODIFY c_t_data FROM l_s_icctrcsta1.
READ TABLE it_bseg1 INTO wa_bseg1
WITH KEY
belnr = wa_bkpf-belnr
bukrs = wa_bkpf-bukrs
gjahr = wa_bkpf-gjahr.
IF sy-subrc EQ 0.
MOVE wa_bseg1-lifnr TO l_s_icctrcsta1-lifnr.
MODIFY c_t_data FROM l_s_icctrcsta1.
CLEAR: l_s_icctrcsta1,
wa_bseg1,
l_gjahr1.
ENDIF.
ENDIF.
endif.
endif.
CLEAR: l_s_icctrcsta1.
CLEAR: COVP, REF_NO.
ENDLOOP.
ENDIF.

Hello Amruta,
I was just looking at your coding:
LOOP AT c_t_data INTO l_s_icctrcsta1.
MOVE l_s_icctrcsta1-fiscper(4) TO l_gjahr1.
select single AWORG AWTYP INTO CORRESPONDING FIELDS OF COVP FROM COVP
WHERE belnr = l_s_icctrcsta1-belnr.
if sy-subrc = 0.
if COVP-AWORG is initial.
concatenate l_s_icctrcsta1-refbn '%' into ref_no.
READ TABLE it_bkpf INTO wa_bkpf WITH KEY awkey(10) =
l_s_icctrcsta1-refbn
awtyp = COVP-AWTYP
gjahr = l_gjahr1.
Here you are interested in those BKPF records that are related to the contents of c_t_data internal table.
I guess that this table does not contain millions of entries. Am I right?
If yes, then the first step would be to pre-select the COVP entries:
select belnr aworg awtyp into table lt_covp from covp
for all entries in c_t_data
where belnr = c_t_data-belnr.
sort lt_covp by belnr.
Once having this data ready, you build an internal table for BKPF selection:
LOOP AT c_t_data INTO l_s_icctrcsta1.
clear ls_bkpf_sel.
ls_bkpf_sel-awkey(10) = l_s_icctrcsta1-refbn.
read table lt_covp with key belnr = l_s_icctrcsta1-belnr binary search.
if sy-subrc = 0.
ls_bkpf_sel-awtyp = lt_covp-awtyp.
endif.
ls_bkpf_sel-gjahr = l_s_icctrcsta1-fiscper(4).
insert ls_bkpf_sel into table lt_bkpf_sel.
ENDLOOP.
Now you have all necessary info to read BKPF:
SELECT
belnr
xblnr
bktxt
awkey
bukrs
gjahr
AWTYP
FROM bkpf
INTO TABLE it_bkpf
for all entries in lt_bkpf_sel
WHERE
awkey = lt_bkpf_sel-awkey and
awtyp = lt_bkpf_sel-awtyp and
gjahr = lt_bkpf_sel-gjahr.
Then you can access BSEG with the bukrs, belnr and gjahr from the selected BKPF entries. This will be fast.
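That last step could be sketched like this (hedged; same field list as the original it_bseg1, and remember that a FOR ALL ENTRIES select must be guarded against an empty driver table):

```abap
* Hedged sketch of the BSEG access Yuri describes: restrict by
* the keys already found in BKPF instead of a full-table scan.
IF it_bkpf IS NOT INITIAL.
  SELECT lifnr belnr bukrs gjahr
    FROM bseg
    INTO TABLE it_bseg1
    FOR ALL ENTRIES IN it_bkpf
    WHERE bukrs = it_bkpf-bukrs
      AND belnr = it_bkpf-belnr
      AND gjahr = it_bkpf-gjahr.
ENDIF.
```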
Moreover I would even try to make a join on DB level. But first try this solution.
Regards,
Yuri