100000 records table giving problem?
Hello All,
I am using an Oracle database, Windows 2000, and the OC4J (Oracle Containers for J2EE) application server. When I query an emp table that has 100,000 records, store all 100,000 records in a ResultSet, and then iterate through the ResultSet in a while loop to store each emp record in an emp bean array, my application hangs at that point, so I am unable to build an emp bean array of 100,000 records. What is the reason?
Also, I am getting an 'OutOfMemoryError' on the client side (JSP) when I try to iterate over a Vector that stores each of the 100,000 emp records as an inner Vector. I tried increasing my system page file size (virtual memory) to 1 GB, but this didn't help. How can I avoid this error on Windows 2000 and on a Unix box?
Thanks and Regards,
Kumar.
You might want to see the responses to this post by some other person who seems to have exactly the same problem as you.
Similar Messages
-
10000 records resultset giving problem
I agree with all the above posts: you shouldn't ever try to store 10K or more result objects in memory, let alone generate a web page containing all of those results. Filter already at the database level, or, if you will be showing unfiltered data, give the user a scrollable "window" into the results, just as pretty much every web page does.
If the data is pretty homogeneous, you might even try using a pattern like Flyweight from the GoF book.
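To make the Flyweight suggestion concrete, here is a minimal, hypothetical Java sketch (the DeptPool class and the department values are made up for illustration): repeated attribute values are canonicalized so 100,000 rows can share one instance of each distinct string instead of holding 100,000 copies.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical Flyweight-style pool: canonicalize repeated values so
// duplicate rows reference one shared instance.
public class DeptPool {
    private final Map<String, String> pool = new HashMap<>();

    // Returns the shared instance for this value, storing it on first sight.
    public String intern(String dept) {
        String shared = pool.get(dept);
        if (shared == null) {
            pool.put(dept, dept);
            shared = dept;
        }
        return shared;
    }

    public int size() { return pool.size(); }

    public static void main(String[] args) {
        DeptPool p = new DeptPool();
        String a = p.intern(new String("SALES"));
        String b = p.intern(new String("SALES"));
        System.out.println(a == b);   // true: both rows share one instance
        System.out.println(p.size()); // 1
    }
}
```

With thousands of rows sharing a handful of distinct values, the heap holds one string per distinct value rather than one per row.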
Anyway, if for some ungodly reason you decide to do it, try increasing the heap size of your JVM with the -Xms and -Xmx parameters. For example, java -Xms100m -Xmx512m sets your minimum heap size to 100 megabytes and your maximum to 512 megabytes.
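To verify the -Xmx setting actually took effect in the JVM running your code, you can log the heap ceiling the JVM was granted; Runtime.maxMemory() and friends are standard java.lang APIs. A minimal sketch:

```java
// Minimal check of the heap limits the running JVM was granted.
// maxMemory() reflects the -Xmx ceiling the JVM will attempt to use.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        long freeBytes = Runtime.getRuntime().freeMemory();
        System.out.println("Max heap (MB):  " + maxBytes / (1024 * 1024));
        System.out.println("Free heap (MB): " + freeBytes / (1024 * 1024));
    }
}
```

Note that raising the OS page file, as tried above, does not help here: the OutOfMemoryError is about the JVM heap limit, not about virtual memory.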
-
100000 records resultset giving problem ?
I'd guess the reason is memory related. The simple answer to your question is "don't do that!" Why would a user ever want to look at 100,000 records? If a user request would return such a large number of hits, your app should ask the user to refine the search criteria.
Even if browsing 100,000 rows really is a requirement (which seems doubtful), why read them all in one pass? You could store a reasonable number (e.g. 1,000) and request a new read every 1,000 records. Alternatively, store the 100,000 rows in a disk file and page from there. -
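The windowed-read suggestion above can be sketched as follows. This is a hypothetical stand-in, not code from the original post: fetchPage slices an in-memory list where a real implementation would issue a JDBC query restricted by ROWNUM (or use Statement.setMaxRows), so that only one pageSize window of rows is ever held in memory.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical paging helper: fetch one fixed-size window of rows at a
// time instead of materializing the whole table.
public class Pager {
    // Returns the rows for 0-based page `page` of size `pageSize`.
    // `table` stands in for the emp table; a real version would query it.
    static List<String> fetchPage(List<String> table, int page, int pageSize) {
        int from = page * pageSize;
        if (from >= table.size()) return new ArrayList<>();
        int to = Math.min(from + pageSize, table.size());
        return new ArrayList<>(table.subList(from, to));
    }

    public static void main(String[] args) {
        List<String> table = new ArrayList<>();
        for (int i = 0; i < 2500; i++) table.add("emp" + i);
        // Only one window is in memory at a time.
        System.out.println(fetchPage(table, 0, 1000).size()); // 1000
        System.out.println(fetchPage(table, 2, 1000).size()); // 500
    }
}
```

A JSP would then render page N and offer next/previous links, issuing a fresh bounded query per page rather than one 100,000-row query.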
100000 records resultset giving problem?
I think you need to close any connection to the DB before starting your while loop.
While you are inside the loop the DB connection stays open, and this can cause out-of-memory trouble when others try to open new connections. You may want to move your ResultSet rows into another object, such as a Vector of value objects, and then close the connection.
Hope this will help -
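The suggestion above (drain the ResultSet into a collection, then close the connection) can be sketched like this. The Iterator here is a made-up stand-in for stepping through a JDBC ResultSet, so the draining logic can be shown on its own; a real version would loop on rs.next() and read columns with rs.getString(...).

```java
import java.util.Iterator;
import java.util.Vector;

// Hypothetical sketch: copy rows out of a live source into a Vector so
// the underlying Statement/Connection can be closed as early as possible.
public class Drainer {
    // Drains up to `limit` rows into a detached Vector.
    static Vector<String> drain(Iterator<String> rows, int limit) {
        Vector<String> copy = new Vector<>();
        while (rows.hasNext() && copy.size() < limit) {
            copy.add(rows.next());
        }
        return copy; // caller can now close the ResultSet/Connection
    }

    public static void main(String[] args) {
        Vector<String> src = new Vector<>();
        for (int i = 0; i < 5; i++) src.add("row" + i);
        System.out.println(drain(src.iterator(), 3)); // [row0, row1, row2]
    }
}
```

Note the limit parameter: combined with paging, it keeps the copy bounded, which is what avoids the OutOfMemoryError in the first place.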
Table control from internal table giving problem.
Hi all,
I am creating a table control using the wizard from an internal table in the program. When I supply the work area, it gives this error: "The table work area G_TABC_WA does not exist or is not a structure".
I have declared the internal table and work area like this:
TYPES: BEGIN OF T_TABC,
OPTID LIKE ZHRPMT_TRNSAC-OPTID,
STGID LIKE ZHRPMT_TRNSAC-STGID,
TETID LIKE ZHRPMT_TRNSAC-TETID,
REQSR LIKE ZHRPMT_TRNSAC-REQSR,
MUNIT LIKE ZHRPMT_TRNSAC-MUNIT,
END OF T_TABC.
DATA: G_TABC_ITAB TYPE T_TABC OCCURS 0,
G_TABC_WA TYPE T_TABC. "work area
Why is it giving this error, and how do I correct it?
Thanks.
Hello, there is a very small mistake:
TYPES: BEGIN OF t_tabc,
optid TYPE zhrpmt_trnsac-optid,
stgid TYPE zhrpmt_trnsac-stgid,
tetid TYPE zhrpmt_trnsac-tetid,
reqsr TYPE zhrpmt_trnsac-reqsr,
munit TYPE zhrpmt_trnsac-munit,
END OF t_tabc.
DATA: g_tabc_itab TYPE TABLE OF t_tabc,
g_tabc_wa TYPE t_tabc. "work area
With Regards
Navin Khedikar -
Lock table overflow problem in transaction sm58
Hi,
I have a file-to-IDoc scenario. I am on XI 7.0 SP09.
I am posting about 10,000 records at one time in a 1 MB file.
In IDX5 I am able to see the IDocs.
However, in transaction SM58 I am seeing a "Lock table overflow" error.
Regards,
Deepak
Deepak,
You can solve this problem in two ways:
1) increase the size of the lock table via the parameter enque/table_size
2) increase the number of enqueue work processes from 1 to 2 or 3 via the parameter rdisp/wp_no_enq
Please have a look at SAP Note 928044.
---Satish -
RSA3 is extracting fine, but on the BI side it extracts only 10000 records
Hi Experts,
I made a generic DataSource based on a function module. When I extract data from this DataSource in RSA3, it fetches all records, nearly 700,000 of them.
But after replicating this DataSource to the BI side, when I extract data from it, it fetches only 10,000 records. What mistake might I have made, or is there any setting for FM-based DataSources?
Thanks in advance.
My issue is that in RSA3 the DataSource works fine and returns all records, but when I extract data from the same DataSource on the BIW side, it returns a maximum of 10,000 records. I am using a full load with no selection parameters.
This DataSource is based on a function module; below is its code.
* Auxiliary selection criteria structure
DATA: L_S_SELECT TYPE SRSC_S_SELECT.
* Maximum number of lines for DB table
STATICS: S_S_IF TYPE SRSC_S_IF_SIMPLE,
* counter
S_COUNTER_DATAPAKID LIKE SY-TABIX,
* cursor
S_CURSOR TYPE CURSOR.
STATICS: V1 TYPE I,
V2 TYPE I.
TYPES: BEGIN OF ty_data,
matnr TYPE matnr,
MTART TYPE MTART,
PARTCODE TYPE IDNRK,
WERKS TYPE WERKS_D,
quantity TYPE BGESWERT,
vrkme TYPE vrkme,
END OF ty_data,
BEGIN OF ty_parts,
mandt TYPE mandt,
matnr TYPE matnr,
werks TYPE werks_d,
spdpartcode TYPE matnr,
partcode TYPE matnr,
parttext TYPE maktx,
quantity TYPE BGESWERT,
vrkme TYPE vrkme,
END OF ty_parts,
BEGIN OF ty_partvalue,
mandt TYPE mandt,
regio TYPE regio,
spdpartcode TYPE matnr,
value TYPE abgergeb,
waers TYPE waers,
END OF ty_partvalue.
DATA: it_mara TYPE STANDARD TABLE OF mara,
wa_mara TYPE mara,
it_data TYPE STANDARD TABLE OF ty_data,
wa_data TYPE ty_data,
it_t005s TYPE STANDARD TABLE OF t005s,
wa_t005s TYPE t005s,
l_index TYPE sy-tabix,
it_parts TYPE STANDARD TABLE OF ty_parts,
wa_parts TYPE ty_parts,
it_partvalue TYPE STANDARD TABLE OF ty_partvalue,
wa_partvalue TYPE ty_partvalue,
wa_makl TYPE mkal.
DATA: it_t001w TYPE STANDARD TABLE OF t001w,
wa_t001w TYPE t001w.
DATA: it_stb TYPE STANDARD TABLE OF stpox,
wa_stb TYPE stpox,
it_stb1 TYPE STANDARD TABLE OF stpox,
wa_stb1 TYPE stpox,
it_stb2 TYPE STANDARD TABLE OF stpox,
wa_stb2 TYPE stpox,
it_stb3 TYPE STANDARD TABLE OF stpox,
wa_stb3 TYPE stpox.
DATA: it_bom type standard table of EXTRACT_STRUCT,
wa_bom type EXTRACT_STRUCT.
* Select ranges
ranges: L_R_MATNR for MARA-MATNR,
L_R_MTART for MARA-MTART.
* Initialization mode (first call by SAPI) or data transfer mode
* (following calls)?
IF I_INITFLAG = SBIWA_C_FLAG_ON.
* Initialization: check input parameters,
* buffer input parameters,
* prepare data selection
* Check DataSource validity
CASE I_DSOURCE.
WHEN 'Z_BOM'.
WHEN OTHERS.
IF 1 = 2. MESSAGE E009(R3). ENDIF.
* This is a typical log call. Please write every error message like this:
LOG_WRITE 'E' "message type
'R3' "message class
'009' "message number
I_DSOURCE "message variable 1
' '. "message variable 2
RAISE ERROR_PASSED_TO_MESS_HANDLER.
ENDCASE.
APPEND LINES OF I_T_SELECT TO S_S_IF-T_SELECT.
* Fill parameter buffer for data extraction calls
S_S_IF-REQUNR = I_REQUNR.
S_S_IF-DSOURCE = I_DSOURCE.
S_S_IF-MAXSIZE = I_MAXSIZE.
* Fill field list table for an optimized select statement
* (in case there is no 1:1 relation between InfoSource fields
* and database table fields, this may be far from being trivial)
APPEND LINES OF I_T_FIELDS TO S_S_IF-T_FIELDS.
ELSE. "Initialization mode or data extraction ?
* Data transfer: first call: OPEN CURSOR + FETCH;
* following calls: FETCH only
* First data package -> OPEN CURSOR
IF S_COUNTER_DATAPAKID = 0.
* Fill range tables. BW will only pass down simple selection criteria
* of the type SIGN = 'I' and OPTION = 'EQ' or OPTION = 'BT'.
LOOP AT S_S_IF-T_SELECT INTO L_S_SELECT WHERE FIELDNM = 'MATNR'.
MOVE-CORRESPONDING L_S_SELECT TO L_R_MATNR.
APPEND L_R_MATNR.
ENDLOOP.
LOOP AT S_S_IF-T_SELECT INTO L_S_SELECT WHERE FIELDNM = 'MTART'.
MOVE-CORRESPONDING L_S_SELECT TO L_R_MTART.
APPEND L_R_MTART.
ENDLOOP.
* Determine the number of database records to be read per FETCH statement
* from input parameter I_MAXSIZE. If there is a one-to-one relation
* between DataSource table lines and database entries, this is trivial.
* In other cases, it may be impossible and some estimated value has to
* be determined.
SELECT werks INTO CORRESPONDING FIELDS OF TABLE it_t001w
FROM t001w
WHERE j_1bbranch NE '' AND
pkosa = 'X'.
DELETE it_t001w WHERE werks = 'HSPG'.
SELECT matnr INTO CORRESPONDING FIELDS OF TABLE it_mara
FROM mara
WHERE matnr IN L_R_MATNR AND
mtart IN L_R_MTART AND
lvorm = '' AND
mstae = ''.
IF SY-SUBRC = 0.
REFRESH: it_data.
LOOP AT it_mara INTO wa_mara.
LOOP AT it_t001w INTO wa_t001w.
SELECT SINGLE * INTO wa_makl
FROM mkal
WHERE matnr = wa_mara-matnr AND
werks = wa_t001w-werks.
IF sy-subrc = 0.
REFRESH: it_stb2.
CALL FUNCTION 'CS_BOM_EXPL_MAT_V2'
EXPORTING
capid = 'PP01'
datuv = sy-datum
mktls = 'X'
mehrs = 'X'
mmory = '1'
mtnrv = wa_mara-matnr
svwvo = 'X'
werks = wa_t001w-werks
* werks = 'HSPG'
vrsvo = 'X'
TABLES
stb = it_stb2
EXCEPTIONS
alt_not_found = 1
call_invalid = 2
material_not_found = 3
missing_authorization = 4
no_bom_found = 5
no_plant_data = 6
no_suitable_bom_found = 7
conversion_error = 8
OTHERS = 9.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ELSE.
IF it_stb2 IS NOT INITIAL.
it_stb1 = it_stb2.
* IT_STB1 will have first-level components
DELETE it_stb1 WHERE stufe NE 1.
* IT_STB2 will have second- and higher-level components
DELETE it_stb2 WHERE stufe EQ 1.
DATA: lv_flag TYPE c. "not declared elsewhere in the post; set when a lower-level component is found
CLEAR: lv_flag.
LOOP AT it_stb1 INTO wa_stb1.
l_index = sy-tabix.
LOOP AT it_stb2 INTO wa_stb2 WHERE vwegx = wa_stb1-wegxx.
APPEND wa_stb2 TO it_stb3.
lv_flag = 'X'.
ENDLOOP.
IF lv_flag = 'X'.
DELETE it_stb1 INDEX l_index.
CLEAR: lv_flag.
ENDIF.
ENDLOOP.
* IT_STB2 will have only second-level components
it_stb2 = it_stb3.
APPEND LINES OF it_stb1 TO it_stb2.
APPEND LINES OF it_stb2 TO it_stb.
REFRESH: it_stb1, it_stb2, it_stb3.
ENDIF.
ENDIF.
ENDIF.
ENDLOOP.
LOOP AT it_stb INTO wa_stb.
wa_data-matnr = wa_mara-matnr.
wa_data-mtart = wa_stb-mtart.
wa_data-werks = wa_stb-werks.
wa_data-partcode = wa_stb-idnrk.
wa_data-quantity = wa_stb-MENGE.
wa_data-vrkme = wa_stb-MEINS.
APPEND wa_data TO it_data.
ENDLOOP.
REFRESH: it_stb.
ENDLOOP.
ENDIF.
V1 = 1.
V2 = S_S_IF-MAXSIZE.
ENDIF.
* Fetch records into the interface table,
* named E_T_'Name of extract structure'.
* The cursor-based template code below is not usable here, because this
* extractor fills E_T_DATA from it_data instead and S_CURSOR is never
* opened, so it is commented out:
* FETCH NEXT CURSOR S_CURSOR
* APPENDING CORRESPONDING FIELDS
* OF TABLE E_T_DATA
* PACKAGE SIZE S_S_IF-MAXSIZE.
* IF SY-SUBRC <> 0.
* CLOSE CURSOR S_CURSOR.
* RAISE NO_MORE_DATA.
* ENDIF.
CLEAR: wa_data.
LOOP AT it_data INTO wa_data.
IF SY-TABIX GE V1 AND SY-TABIX LE V2.
APPEND wa_data TO E_T_DATA.
CLEAR wa_data.
ENDIF.
ENDLOOP.
S_COUNTER_DATAPAKID = S_COUNTER_DATAPAKID + 1.
IF E_T_DATA[] IS INITIAL.
RAISE NO_MORE_DATA.
ENDIF.
V1 = V2 + 1.
V2 = V2 + S_S_IF-MAXSIZE.
ENDIF.
*} INSERT
ENDFUNCTION.
Edited by: damawat on Aug 26, 2011 2:20 PM -
Urgent: Error-Record 39,779, segment 0001 is not in the cross-record table
Hi Gurus,
This is an urgent production issue: I got the following errors while updating data records from a DSO to an InfoCube in delta mode:
1. Record 39,779, segment 0001 is not in the cross-record table
2. Error in substep: End Routine
I don't know whether the problem is in the end routine or somewhere else. The end routine is this:
PROGRAM trans_routine.
* CLASS routine DEFINITION
CLASS lcl_transform DEFINITION.
PUBLIC SECTION.
* Attributes
DATA:
p_check_master_data_exist
TYPE RSODSOCHECKONLY READ-ONLY,
*- Instance for getting request runtime attributes;
* Available information: refer to methods of
* interface 'if_rsbk_request_admintab_view'
p_r_request
TYPE REF TO if_rsbk_request_admintab_view READ-ONLY.
PRIVATE SECTION.
TYPE-POOLS: rsd, rstr.
* Rule-specific types
TYPES:
BEGIN OF tys_TG_1,
* InfoObject: ZVEHICLE Unique Vehicle ID.
/BIC/ZVEHICLE TYPE /BIC/OIZVEHICLE,
* InfoObject: ZLOCID Mine Site.
/BIC/ZLOCID TYPE /BIC/OIZLOCID,
* InfoObject: ZLOCSL Location Storage Location.
/BIC/ZLOCSL TYPE /BIC/OIZLOCSL,
* InfoObject: 0VENDOR Vendor.
VENDOR TYPE /BI0/OIVENDOR,
* InfoObject: ZNOMTK Nomination Number.
/BIC/ZNOMTK TYPE /BIC/OIZNOMTK,
* InfoObject: ZNOMIT Nomination Item.
/BIC/ZNOMIT TYPE /BIC/OIZNOMIT,
* InfoObject: ZNOMNR Nomination number.
/BIC/ZNOMNR TYPE /BIC/OIZNOMNR,
* InfoObject: ZVSTTIME Vehicle Starting Time Stamp.
/BIC/ZVSTTIME TYPE /BIC/OIZVSTTIME,
* InfoObject: ZVEDTIME Vehicle Ending Time Stamp.
/BIC/ZVEDTIME TYPE /BIC/OIZVEDTIME,
* InfoObject: ZNETWT Net Weight.
/BIC/ZNETWT TYPE /BIC/OIZNETWT,
* InfoObject: TU_GRS_WG Gross Wgt.
/BIC/TU_GRS_WG TYPE /BIC/OITU_GRS_WG,
* InfoObject: ZTU_TRE_W Tare Wgt.
/BIC/ZTU_TRE_W TYPE /BIC/OIZTU_TRE_W,
* InfoObject: ZCUSTWT Customer Weight.
/BIC/ZCUSTWT TYPE /BIC/OIZCUSTWT,
* InfoObject: ZCAR_NO Car Number.
/BIC/ZCAR_NO TYPE /BIC/OIZCAR_NO,
* InfoObject: ZINBND_ID Train Consist Inbound ID.
/BIC/ZINBND_ID TYPE /BIC/OIZINBND_ID,
* InfoObject: ZOTBND_ID Train Consist Return Load.
/BIC/ZOTBND_ID TYPE /BIC/OIZOTBND_ID,
* InfoObject: 0SOLD_TO Sold-to Party.
SOLD_TO TYPE /BI0/OISOLD_TO,
* InfoObject: 0CUSTOMER Customer Number.
CUSTOMER TYPE /BI0/OICUSTOMER,
* InfoObject: 0SHIP_TO Ship-To Party.
SHIP_TO TYPE /BI0/OISHIP_TO,
* InfoObject: ZVEHI_NO Vehicle Number.
/BIC/ZVEHI_NO TYPE /BIC/OIZVEHI_NO,
* InfoObject: ZCARSTDAT Car Start Date.
/BIC/ZCARSTDAT TYPE /BIC/OIZCARSTDAT,
* InfoObject: ZCAREDDAT Car End Date.
/BIC/ZCAREDDAT TYPE /BIC/OIZCAREDDAT,
* InfoObject: ZCARSTTIM Car Start Time.
/BIC/ZCARSTTIM TYPE /BIC/OIZCARSTTIM,
* InfoObject: ZCAREDTIM Car End Time.
/BIC/ZCAREDTIM TYPE /BIC/OIZCAREDTIM,
* InfoObject: 0COMPANY Company.
COMPANY TYPE /BI0/OICOMPANY,
* InfoObject: ZCONTRACT Contract.
/BIC/ZCONTRACT TYPE /BIC/OIZCONTRACT,
* InfoObject: 0PLANT Plant.
PLANT TYPE /BI0/OIPLANT,
* InfoObject: ZLOADTIME Total Vehicle Loading time.
/BIC/ZLOADTIME TYPE /BIC/OIZLOADTIME,
* InfoObject: ZSHIPDATE Shipping Date.
/BIC/ZSHIPDATE TYPE /BIC/OIZSHIPDATE,
* InfoObject: ZSHIPTIME Shipping Time.
/BIC/ZSHIPTIME TYPE /BIC/OIZSHIPTIME,
* InfoObject: ZMNEDDT Manifest End Date.
/BIC/ZMNEDDT TYPE /BIC/OIZMNEDDT,
* InfoObject: ZMNEDTM Manifest End Time.
/BIC/ZMNEDTM TYPE /BIC/OIZMNEDTM,
* InfoObject: ZLDEDDT Loaded End Date.
/BIC/ZLDEDDT TYPE /BIC/OIZLDEDDT,
* InfoObject: ZLDEDTM Loaded End Time.
/BIC/ZLDEDTM TYPE /BIC/OIZLDEDTM,
* InfoObject: ZMANVAR Manifest Variance.
/BIC/ZMANVAR TYPE /BIC/OIZMANVAR,
* InfoObject: ZTU_TYPE Trpr Unit Type.
/BIC/ZTU_TYPE TYPE /BIC/OIZTU_TYPE,
* InfoObject: ZACTULQTY Actual posted quantity.
/BIC/ZACTULQTY TYPE /BIC/OIZACTULQTY,
* InfoObject: ZVEDDT Vehicle End Date.
/BIC/ZVEDDT TYPE /BIC/OIZVEDDT,
* InfoObject: ZVEDTM Vehicle End Time.
/BIC/ZVEDTM TYPE /BIC/OIZVEDTM,
* InfoObject: ZVSTDT Vehicle Start Date.
/BIC/ZVSTDT TYPE /BIC/OIZVSTDT,
* InfoObject: ZVSTTM Vehicle Start Time.
/BIC/ZVSTTM TYPE /BIC/OIZVSTTM,
* InfoObject: ZTRPT_TYP Vehicle type.
/BIC/ZTRPT_TYP TYPE /BIC/OIZTRPT_TYP,
* InfoObject: 0CALMONTH Calendar Year/Month.
CALMONTH TYPE /BI0/OICALMONTH,
* InfoObject: 0CALYEAR Calendar Year.
CALYEAR TYPE /BI0/OICALYEAR,
* InfoObject: ZLOEDDT Quality Sent End Date.
/BIC/ZLOEDDT TYPE /BIC/OIZLOEDDT,
* InfoObject: ZLOEDTM Quality sent End Time.
/BIC/ZLOEDTM TYPE /BIC/OIZLOEDTM,
* InfoObject: ZATMDDT At Mine End Date.
/BIC/ZATMDDT TYPE /BIC/OIZATMDDT,
* InfoObject: ZATMDTM At Mine End Time.
/BIC/ZATMDTM TYPE /BIC/OIZATMDTM,
* InfoObject: ZDELAY Delay Duration.
/BIC/ZDELAY TYPE /BIC/OIZDELAY,
* InfoObject: ZSITYP Schedule type.
/BIC/ZSITYP TYPE /BIC/OIZSITYP,
* InfoObject: ZDOCIND Reference document indicator.
/BIC/ZDOCIND TYPE /BIC/OIZDOCIND,
* InfoObject: 0BASE_UOM Base Unit of Measure.
BASE_UOM TYPE /BI0/OIBASE_UOM,
* InfoObject: 0UNIT Unit of Measure.
UNIT TYPE /BI0/OIUNIT,
* InfoObject: ZACT_UOM Actual UOM.
/BIC/ZACT_UOM TYPE /BIC/OIZACT_UOM,
* Field: RECORD.
RECORD TYPE RSARECORD,
END OF tys_TG_1.
TYPES:
tyt_TG_1 TYPE STANDARD TABLE OF tys_TG_1
WITH NON-UNIQUE DEFAULT KEY.
*$*$ begin of global - insert your declaration only below this line -
... "insert your code here
*$*$ end of global - insert your declaration only before this line -
METHODS
end_routine
IMPORTING
request type rsrequest
datapackid type rsdatapid
EXPORTING
monitor type rstr_ty_t_monitors
CHANGING
RESULT_PACKAGE type tyt_TG_1
RAISING
cx_rsrout_abort.
METHODS
inverse_end_routine
IMPORTING
i_th_fields_outbound TYPE rstran_t_field_inv
i_r_selset_outbound TYPE REF TO cl_rsmds_set
i_is_main_selection TYPE rs_bool
i_r_selset_outbound_complete TYPE REF TO cl_rsmds_set
i_r_universe_inbound TYPE REF TO cl_rsmds_universe
CHANGING
c_th_fields_inbound TYPE rstran_t_field_inv
c_r_selset_inbound TYPE REF TO cl_rsmds_set
c_exact TYPE rs_bool.
ENDCLASS. "routine DEFINITION
*$*$ begin of 2nd part global - insert your code only below this line *
... "insert your code here
*$*$ end of 2nd part global - insert your code only before this line *
* CLASS routine IMPLEMENTATION
CLASS lcl_transform IMPLEMENTATION.
* Method end_routine
* Calculation of result package via end routine
* Note: Update of target fields depends on rule assignment in
* transformation editor. Only fields that have a rule assigned
* are updated to the data target.
* <-> result package
METHOD end_routine.
*=== Segments ===
FIELD-SYMBOLS:
<RESULT_FIELDS> TYPE tys_TG_1.
DATA:
MONITOR_REC TYPE rstmonitor.
*$*$ begin of routine - insert your code only below this line *-*
* Fill the following fields by reading the Nomination and Vehicles DSO:
* SOLD_TO, CUSTOMER
data: L_TIMESTAMP1 TYPE timestamp,
L_TIMESTAMP2 TYPE timestamp,
L_TIMESTAMP3 type CCUPEAKA-TIMESTAMP,
L_TIMESTAMP4 type CCUPEAKA-TIMESTAMP,
L_TIMESTAMP5 type CCUPEAKA-TIMESTAMP,
L_TIMESTAMP6 type CCUPEAKA-TIMESTAMP,
L_TIMESTAMP7 TYPE timestamp,
L_TIMESTAMP8 TYPE timestamp,
L_TIMESTAMP9 type timestamp,
L_TIMESTAMP10 type TIMESTAMP,
L_CHAR1(14),
L_CHAR2(14),
l_duration type I,
L_TS TYPE TZONREF-TZONE,
l_flag,
l_nomit TYPE /BIC/OIZNOMIT,
l_error_flag.
l_TS = 'CST'.
Data: EXTRA_PACKAGE type tyt_TG_1.
data: extra_fields type tys_TG_1.
LOOP at RESULT_PACKAGE ASSIGNING <RESULT_FIELDS>.
clear l_error_flag.
* Get sold_to and customer from the nomination table.
Select single SOLD_TO /BIC/ZLOCSL /BIC/ZCONTRACT COMPANY
/BIC/ZMNEDDT /BIC/ZMNEDTM /BIC/ZLDEDDT
/BIC/ZLDEDTM SHIP_TO /BIC/ZACTULQTY
/BIC/ZLOEDDT /BIC/ZLOEDTM /BIC/ZDELAY
/BIC/ZATMDDT /BIC/ZATMDTM
/BIC/ZSITYP /BIC/ZDOCIND
into (<RESULT_FIELDS>-SOLD_TO,
<RESULT_FIELDS>-/BIC/ZLOCSL,
<RESULT_FIELDS>-/BIC/ZCONTRACT,
<RESULT_FIELDS>-company,
<RESULT_FIELDS>-/BIC/ZMNEDDT,
<RESULT_FIELDS>-/BIC/ZMNEDTM,
<RESULT_FIELDS>-/BIC/ZLDEDDT,
<RESULT_FIELDS>-/BIC/ZLDEDTM,
<RESULT_FIELDS>-SHIP_TO,
<RESULT_FIELDS>-/BIC/ZACTULQTY,
<RESULT_FIELDS>-/BIC/ZLOEDDT,
<RESULT_FIELDS>-/BIC/ZLOEDTM,
<RESULT_FIELDS>-/BIC/ZDELAY,
<RESULT_FIELDS>-/BIC/ZATMDDT,
<RESULT_FIELDS>-/BIC/ZATMDTM,
<RESULT_FIELDS>-/BIC/ZSITYP,
<RESULT_FIELDS>-/BIC/ZDOCIND)
from /BIC/AZTSW_0000
where /BIC/ZNOMTK = <RESULT_FIELDS>-/BIC/ZNOMTK
AND /BIC/ZNOMIT = <RESULT_FIELDS>-/BIC/ZNOMIT.
* Select invalid nominations
if sy-subrc <> 0.
l_error_flag = 'X'.
endif.
<RESULT_FIELDS>-customer = <RESULT_FIELDS>-SOLD_TO.
* Prepare time stamps for time differences
* Vehicle starting time stamp
clear : L_TIMESTAMP9,L_TIMESTAMP10.
CONVERT DATE <RESULT_FIELDS>-/BIC/ZCARSTDAT TIME
<RESULT_FIELDS>-/BIC/ZCARSTTIM
INTO TIME STAMP L_TIMESTAMP9 TIME ZONE l_TS.
* Vehicle ending time stamp
CONVERT DATE <RESULT_FIELDS>-/BIC/ZCAREDDAT TIME
<RESULT_FIELDS>-/BIC/ZCAREDTIM
INTO TIME STAMP L_TIMESTAMP10 TIME ZONE l_TS.
Clear : L_TIMESTAMP3, L_TIMESTAMP4,
<RESULT_FIELDS>-/BIC/ZVEDTIME,
<RESULT_FIELDS>-/BIC/ZVSTTIME.
<RESULT_FIELDS>-/BIC/ZVEDTIME = L_TIMESTAMP10.
<RESULT_FIELDS>-/BIC/ZVSTTIME = L_TIMESTAMP9.
L_TIMESTAMP3 = L_TIMESTAMP10.
L_TIMESTAMP4 = L_TIMESTAMP9.
* Calculate the load time
IF L_TIMESTAMP3 is initial.
clear <RESULT_FIELDS>-/BIC/ZLOADTIME.
elseif L_TIMESTAMP4 is initial.
clear <RESULT_FIELDS>-/BIC/ZLOADTIME.
else.
CALL FUNCTION 'CCU_TIMESTAMP_DIFFERENCE'
EXPORTING
timestamp1 = L_TIMESTAMP3
timestamp2 = L_TIMESTAMP4
IMPORTING
DIFFERENCE = <RESULT_FIELDS>-/BIC/ZLOADTIME.
ENDIF.
* Calculate the manifest variance
clear : L_TIMESTAMP5,L_TIMESTAMP6,L_TIMESTAMP7,L_TIMESTAMP8.
CONVERT DATE <RESULT_FIELDS>-/BIC/ZMNEDDT TIME
<RESULT_FIELDS>-/BIC/ZMNEDTM
INTO TIME STAMP L_TIMESTAMP7 TIME ZONE l_TS.
CONVERT DATE <RESULT_FIELDS>-/BIC/ZLDEDDT TIME
<RESULT_FIELDS>-/BIC/ZLDEDTM
INTO TIME STAMP L_TIMESTAMP8 TIME ZONE l_TS.
L_TIMESTAMP5 = L_TIMESTAMP7.
L_TIMESTAMP6 = L_TIMESTAMP8.
* Calculate the manifest variance
IF L_TIMESTAMP5 is initial.
clear <RESULT_FIELDS>-/BIC/ZMANVAR.
elseif L_TIMESTAMP6 is initial.
clear <RESULT_FIELDS>-/BIC/ZMANVAR.
else.
CALL FUNCTION 'CCU_TIMESTAMP_DIFFERENCE'
EXPORTING
timestamp1 = L_TIMESTAMP5
timestamp2 = L_TIMESTAMP6
IMPORTING
DIFFERENCE = <RESULT_FIELDS>-/BIC/ZMANVAR.
IF sy-subrc <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDIF.
* Delete data packets with blank nominations,
* blank ship dates, and invalid time stamps
IF <RESULT_FIELDS>-/BIC/ZNOMTK IS INITIAL OR
<RESULT_FIELDS>-/BIC/ZSHIPDATE IS INITIAL.
l_error_flag = 'X'.
ENDIF.
<RESULT_FIELDS>-/BIC/ZVEHI_NO = 1.
<RESULT_FIELDS>-CALMONTH = <RESULT_FIELDS>-/BIC/ZSHIPDATE(6).
<RESULT_FIELDS>-CALYEAR = <RESULT_FIELDS>-/BIC/ZSHIPDATE(4).
if l_error_flag = 'X'.
* Looks like monitor entries are not working in SP11,
* hence the following is commented out temporarily:
* CLEAR MONITOR_REC.
* MONITOR_REC-MSGID = '0M'.
* MONITOR_REC-MSGTY = 'S'.
* MONITOR_REC-MSGNO = '501'.
* MONITOR_REC-MSGV1 = <RESULT_FIELDS>-/BIC/ZNOMTK.
* MONITOR_REC-recno = sy-tabix.
* APPEND MONITOR_REC TO MONITOR.
* RAISE EXCEPTION TYPE CX_RSROUT_ABORT.
DELETE RESULT_PACKAGE INDEX sy-tabix.
CLEAR L_ERROR_FLAG.
else.
MODIFY RESULT_PACKAGE FROM <RESULT_FIELDS>.
endif.
clear l_nomit.
l_nomit = <RESULT_FIELDS>-/BIC/ZNOMIT.
extra_fields = <RESULT_FIELDS>.
* Actual qty and contract details
Select /BIC/ZLOCSL /BIC/ZNOMIT /BIC/ZCONTRACT /BIC/ZACTULQTY
/BIC/ZSITYP /BIC/ZDOCIND
SOLD_TO SHIP_TO COMPANY
into (extra_fields-/BIC/ZLOCSL,
extra_fields-/BIC/ZNOMIT,
extra_fields-/BIC/ZCONTRACT,
extra_fields-/BIC/ZACTULQTY,
extra_fields-/BIC/ZSITYP,
extra_fields-/BIC/ZDOCIND,
extra_fields-SOLD_TO,
extra_fields-SHIP_TO,
extra_fields-company)
from /BIC/AZTSW_0000
where /BIC/ZNOMTK = <RESULT_FIELDS>-/BIC/ZNOMTK AND
/BIC/ZNOMIT <> l_NOMIT.
INSERT extra_fields into table EXTRA_PACKAGE.
endselect.
ENDLOOP.
Append lines of extra_package to RESULT_PACKAGE.
*-- fill table "MONITOR" with values of structure "MONITOR_REC"
*-- to make monitor entries
* ... to cancel the update process:
* RAISE EXCEPTION TYPE cx_rsrout_abort.
$$ end of routine - insert your code only before this line -
ENDMETHOD. "end_routine
* Method inverse_end_routine
* This subroutine needs to be implemented only for direct access
* (for better performance) and for the Report/Report Interface
* (drill-through).
* The inverse routine should transform a projection and
* a selection for the target to a projection and a selection
* for the source, respectively.
* If the implementation remains empty, all fields are filled and
* all values are selected.
METHOD inverse_end_routine.
*$*$ begin of inverse routine - insert your code only below this line -
... "insert your code here
*$*$ end of inverse routine - insert your code only before this line -
ENDMETHOD. "inverse_end_routine
ENDCLASS. "routine IMPLEMENTATION
Hi,
Most probably you are appending records to the data package, or deleting records from it, in an end routine, expert routine, or start routine.
I just solved this: you have to import SAP Note 1180163, then modify your code to include the function module mentioned in SAP Note 1223532.
You need to call the function module just before you append the records. This works perfectly.
Thanks
Ajeet -
SM58 - IDoc adapter inbound: IDoc data record table contains no entries
I am trying to send IDocs from SAP ECC 6.0 via PI 7.0; up until 2 days ago there was no problem.
Since yesterday, only one specific type of IDoc does not make it into XI (PI). In the IDoc monitor (WE02) the IDocs that were created have status 3, which is good. But all IDocs of that specific type (ZRESCR01) do not reach XI. I can only find them back in SM58, where it gives the following message:
IDoc adapter inbound: IDoc data record table contains no entries
I have checked SAP Notes 1157385 and 940313, but neither gives me any more insight into this error. I have also checked all the configuration in WE20, SM59, and in XI (repository and directory) and in XI IDX1 and IDX2, but could not find anything that would cause this. I also cannot think of anything that changed in the last 2 days.
Please point me in the right direction.
Hi,
I think you find entries in SM58 only when there is some failure, such as in the login credentials.
If there is a change in the IDoc structure, you have to reimport the IDoc metadata definition in IDX2; otherwise this is not required.
Please check that the logical system name points to your required target system.
Please also verify that your port is not blocked.
Please see the following; it may help:
Monitoring the IDOC Adapter in XI/PI using IDX5
regards,
navneet -
SHARED MEMORY AND DATABASE MEMORY giving problem.
Hello Friends,
I am facing a problem with EXPORT TO MEMORY and IMPORT FROM MEMORY.
I have developed one program that EXPORTs an internal table and some variables to memory. This program calls another program via a background job; that program uses IMPORT to get the first program's data.
The IMPORT command works perfectly in the foreground, but it does not work in the background.
So I reviewed a couple of forum threads and tried both SHARED MEMORY and DATABASE storage, but no luck: the background run still gives the problem.
When I remove the VIA JOB addition from the SUBMIT statement it works, but I need to execute this program in the background via a background job. Please help me: what should I do?
Please find my code below.
Option 1:
EXPORT TAB = ITAB
TO DATABASE indx(Z1)
FROM w_indx
CLIENT sy-mandt
ID 'XYZ'.
Option 2:
EXPORT ITAB FROM ITAB
TO SHARED MEMORY indx(Z1)
FROM w_indx
CLIENT sy-mandt
ID 'XYZ'.
SUBMIT ZPROG2 TO SAP-SPOOL
SPOOL PARAMETERS print_parameters
WITHOUT SPOOL DYNPRO
VIA JOB name NUMBER number
AND RETURN.
===
Hope everybody understands the problem.
My sincere request: please post only relevant answers; do not post dummy answers for points.
Thanks
Raghu
Hi,
You cannot exchange data between your programs using ABAP memory, because that memory is shared only between objects within the same internal session.
When you call your report using VIA JOB, a new session is created.
Instead of using EXPORT and IMPORT to memory, put both programs into the same function group and use global data objects of the _TOP include to exchange data.
Another option is to use SPA/GPA parameters (SET PARAMETER ID / GET PARAMETER ID), because SAP memory is available between all open sessions. Of course, it depends on which type of data you want to export.
Hope it was helpful,
Kind regards.
F.S.A. -
Base Table for problem code in Cs_incidents_all_b
Hi,
In cs_incidents_all_b we have problem_code, but the column does not contain any data. Do we have any TL table for problem codes? I found cs_sr_prob_code_mapping_detail, but if I query this:
SELECT dra.repair_number,
items.description item_desc,
prob.problem_code,
fndl.meaning flow_status_name,
inc.summary,
nvl(cp.instance_number,'Not availble') ib_instance_number
FROM csd_repairs dra,
csd_repair_types_tl drtt,
cs_incidents_all_b sr,
csi_item_instances cp,
fnd_lookups fndl,
csd_flow_statuses_b fsb,
mtl_system_items_kfv items,
mtl_units_of_measure_tl uom,
jtf_rs_resource_extns_tl rstl,
jtf_rs_groups_tl rgtl,
fnd_lookups plkup,
cs_incidents_all_tl inc,
cs_sr_prob_code_mapping_detail prob,
cs_incident_types_b ty
WHERE dra.repair_type_id = drtt.repair_type_id
AND drtt.language = userenv('LANG')
AND dra.repair_mode = 'WIP'
AND dra.incident_id = sr.incident_id
AND dra.CUSTOMER_PRODUCT_ID = cp.INSTANCE_ID (+)
AND dra.flow_status_id = fsb.flow_status_id
AND fsb.flow_status_code = fndl.lookup_code
AND fndl.lookup_type = 'CSD_REPAIR_FLOW_STATUS'
AND dra.inventory_item_id = items.inventory_item_id
AND dra.unit_of_measure = uom.uom_code
AND uom.language = userenv('LANG')
AND dra.resource_id = rstl.resource_id (+)
AND rstl.category (+) = 'EMPLOYEE'
AND rstl.language (+) = userenv('LANG')
AND dra.owning_organization_id = rgtl.group_id (+)
AND rgtl.language (+) = userenv('LANG')
AND dra.ro_priority_code = plkup.lookup_code(+)
AND plkup.lookup_type(+) = 'CSD_RO_PRIORITY'
AND items.organization_id = cs_std.get_item_valdn_orgzn_id
AND inc.incident_id =dra.incident_id
and ty.incident_type_id=sr.incident_type_id
and prob.incident_type_id=ty.incident_type_id
AND fndl.meaning in('Open')
order by dra.repair_number
I get different problem codes for the same repair number; here I only want the records relevant to Depot Repair.
In 11.5.9, the problem and resolution codes are stored in the FND_LOOKUP_VALUES table with lookup types 'REQUEST_PROBLEM_CODE' and 'REQUEST_RESOLUTION_CODE'. I'm hoping you could still use these tables to find problem codes, even if you are on 11.5.10 or R12.
Join would be something like:
WHERE fnd_lookup_values.lookup_type = 'REQUEST_PROBLEM_CODE'
AND fnd_lookup_values.problem_code = cs_incidents_all_b.problem_code
Regarding restricting the query to Depot Repair service requests, you need to restrict by the incident_type_id for this type of SR (for example, the id for the Depot incident type is 10003 for us).
HTH
Alka -
Hi,
I am running a query to delete 10,000 records, but it hangs in SQL*Plus and Toad. I originally had 4 conditions in the WHERE clause, and then reduced it to only 1, but it still appears to hang. The query is:
delete from ppbs_inv_sim_serial where sim_serial_no between '899105203112085000' and '899105203112094999'
Please help in resolving this, as it is urgent.
Regards
As Rao said, you should check for locks before running the statement. You can check whether any lock is being held with this
script:
col object_name format a20
col username format a10
col oracle_username format a10
col process format a15
col owner format a10
prompt ****************************************************************
prompt *** Object Lock Contention ***
prompt ****************************************************************
set pages 0
set linesize 150
select 'Date : '||to_char(sysdate,'DD/MM/YYYY')||' Time : '||to_char(sysdate,'HH:MI:SS') from dual;
select 'Database Name : '||name from sys.v_$database;
set pages 1000
SELECT DISTINCT
O.OBJECT_NAME,
SH.USERNAME,
SH.SID,
SW.USERNAME,
SW.SID,
DECODE(LH.LMODE,
1, 'null',
2, 'row share',
3, 'row exclusive',
4, 'share',
5, 'share row exclusive',
6, 'exclusive')
FROM DBA_OBJECTS O,
V$SESSION SW,
V$LOCK LW,
V$SESSION SH,
V$LOCK LH
WHERE LH.ID1 = O.OBJECT_ID
AND LH.ID1 = LW.ID1
AND SH.SID = LH.SID
AND SW.SID = LW.SID
AND SH.LOCKWAIT IS NULL
AND SW.LOCKWAIT IS NOT NULL
AND LH.TYPE = 'TM'
AND LW.TYPE = 'TM';
prompt Press Enter to continue ...
pause
prompt ************************************************************
prompt *** Object Lock Information ***
prompt ************************************************************
SELECT
A.OBJECT_NAME,
A.OWNER,
C.SERIAL#,
B.OBJECT_ID,
B.SESSION_ID,
B.ORACLE_USERNAME,
B.OS_USER_NAME,
B.PROCESS,
DECODE(B.LOCKED_MODE,
0,'None',
1,'Null',
2,'Row-S (SS)',
3,'Row-X (SX)',
4,'Share',
5,'S/Row-X (SSX)',
6,'Exclusive') LMODE
FROM DBA_OBJECTS A, V$LOCKED_OBJECT B, V$SESSION C
WHERE A.OBJECT_ID = B.OBJECT_ID AND C.SID = B.SESSION_ID
ORDER BY A.OWNER, A.OBJECT_NAME, C.SERIAL#;
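If no blocking session shows up, the delete may simply be slow rather than blocked (a large range delete generates plenty of undo, and without an index it full-scans the table). One untested alternative, reusing the table and range from the original post, is to break the delete into smaller committed batches; the split point below is arbitrary:

```sql
-- Hedged sketch: split the original serial-number range into smaller
-- batches and commit between them (bounds from the original post).
DELETE FROM ppbs_inv_sim_serial
 WHERE sim_serial_no BETWEEN '899105203112085000' AND '899105203112089999';
COMMIT;
DELETE FROM ppbs_inv_sim_serial
 WHERE sim_serial_no BETWEEN '899105203112090000' AND '899105203112094999';
COMMIT;
```

It is also worth checking whether sim_serial_no is indexed; without an index, each of these deletes will full-scan the table.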
Joel Pérez -
Hi,
I am getting a problem setting the fullname attribute of the user while creating and assigning the resources [AD, Domino, LDAP]. I am using a PeopleSoftActiveSync resource, which is our authoritative source. Whenever a new entry arrives in PeopleSoft, I have to create an account in IDM and on all the resources. I am using a process rule to call the create workflow, from which I call different sub-processes to create the resources. But some of the resources are giving problems, and the error I get via "WF_ACTION_ERROR" says "Missing required Attribute Fullname".
This happens even though I am setting the fullname attribute while creating the IDM account. The resources themselves are assigned properly.
Please help me.
Hello Srinivasa
Two years ago I had the very same problem when I added a new field to an already existing and filled customer table. I debugged the maintenance view and saw that the new field contained an undefined value rather than the initial value.
You could try and use the DB utility (<b>SE14</b>) and activate the table again (of course without deleting the table entries).
Alternatively, you could write a simple ABAP to initialize the field value, e.g.:
DATA:
  gs_mchb TYPE mchb,
  gt_mchb TYPE STANDARD TABLE OF mchb.

SELECT * FROM mchb INTO TABLE gt_mchb.

" Overwrite any undefined values in the new field with the initial value
gs_mchb-zvbeln_inquiry = ' '. " space
MODIFY gt_mchb FROM gs_mchb
  TRANSPORTING zvbeln_inquiry
  WHERE zvbeln_inquiry NE ' '.

UPDATE mchb FROM TABLE gt_mchb.
Regards
Uwe -
Help with querying a 200 million record table
Hi ,
I need to query a 200 million record table which is partitioned by monthly activity.
My problem is that I need to see how many activities occurred on one account within a time frame.
If there are 200 partitions, I need to go into each partition, get the account's activities there, and at the end report the total number of activities.
Fortunately, at most one activity is expected for an account in each partition, and it may be present or absent.
If this table had only 100 records, I would use this:
select account_no, count(*)
from Acct_actvy
group by account_no;
I must stress that it is critical that you do not write code (SQL or PL/SQL) that uses hardcoded partition names to find data.
That approach is very risky, prone to runtime errors, difficult to maintain, and does not scale. It is not worth it.
From the developer's side, there should be total ignorance of the fact that a table is partitioned. A developer must treat a partitioned table no differently than any other table.
To give you an idea.. this a copy-and-paste from a SQL*Plus session doing what you want to do. Against a partitioned table at least 3x bigger than yours. It covers about a 12 month period. There's a partition per day - and empty daily partitions for the next 2 years. The SQL aggregation is monthly. I selected a random network address to illustrate.
SQL> select count(*) from x25_calls;
COUNT(*)
619491919
Elapsed: 00:00:19.68
SQL>
SQL> select TRUNC(callendtime,'MM') AS MONTH, sourcenetworkaddress, count(*) from x25_calls where sourcenetworkaddress = '3103165962'
2 group by TRUNC(callendtime,'MM'), sourcenetworkaddress;
MONTH SOURCENETWORKADDRESS COUNT(*)
2005/09/01 00:00:00 3103165962 3599
2005/10/01 00:00:00 3103165962 1184
2005/12/01 00:00:00 3103165962 4
2005/06/01 00:00:00 3103165962 1
2005/04/01 00:00:00 3103165962 560
2005/08/01 00:00:00 3103165962 101
2005/03/01 00:00:00 3103165962 3330
7 rows selected.
Elapsed: 00:00:19.72
As you can see, there is not a single reference to any partitioning. Excellent performance, despite running on an old K-class HP server.
The reason for the performance is simple: a correctly designed and implemented partitioning scheme that caters for most of the queries against the table, plus correctly designed and implemented indexes - especially local bitmap indexes. No hacks like hardcoded partition names are needed.
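To make that concrete, the kind of index described above might be created like the following. This is an illustrative sketch, not the poster's actual DDL; the index name is invented, and the table and column are the ones from the SQL*Plus session above.

```sql
-- Hedged sketch: a LOCAL bitmap index on the filter column, so each
-- partition carries its own small index segment and partition pruning
-- plus the index work together. The index name is illustrative.
CREATE BITMAP INDEX x25_calls_src_addr_bix
  ON x25_calls (sourcenetworkaddress)
  LOCAL;
```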
Master data query display limited to 10000 records
Hi,
When I run a query based on master data objects like 0CUSTOMER, the query displays only 10000 records instead of about 50000. This happens with all other master data items as well, and started after the upgrade to NetWeaver 2004s. We still use the 3.5 frontend. Transaction data displays correctly. Has anybody faced this problem? Can someone suggest a solution?
Thanks.
Param
Hi
Check out SAP Note <b>955487</b> - "Master data InfoProvider reads only 10000 records".
Implement the advance corrections, since SP09 is still some way from being released.
Hope it Helps.
Chetan
@CP..