Query help needed for Sales order panel user field query.
I have a user-defined form field on the sales order row level called U_DEPFEEAMT.
1. I would like this field to get its value from another field on the same sales order row, multiplied by the value described in point 2 below. The details of the field in point 1 are:
Form=139, Item=38, Pane=1, Column=10002117, Row=1
2. The contents of the field in point 1 should be multiplied by a value coming from another user-defined field linked to the OITM item master.
The details of the user field attached to OITM are:
OITM.U_DepositFeeON
Appreciate your help.
Thank you.
Try this one:
SELECT T0.U_DepositFeeON * $[$38.10002117.number]
FROM dbo.OITM T0
WHERE T0.ItemCode = $[$38.1.0]
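If U_DepositFeeON can be empty on some items, a guarded variant of the same query (a sketch, assuming the field names above) keeps the result from coming back NULL:

```sql
-- Sketch: treat a missing deposit fee as 0 so the multiplication never yields NULL
SELECT ISNULL(T0.U_DepositFeeON, 0) * $[$38.10002117.number]
FROM dbo.OITM T0
WHERE T0.ItemCode = $[$38.1.0]
```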
Thanks,
Gordon
Similar Messages
-
Table for sales order change by field
Dear Friends,
I want to get the data at table level for the list of sales orders with the changed-by field. Can you please tell me which table(s) can be used for the sales order changed-by field?
Thanks,
pinky
Hi,
You can get the changes for a SALES ORDER item in the CDPOS table.
Go to SE16 and find the changes.
Hope it will serve your purpose.
thanks,
santosh -
Detecting change on header and item texts for sales order in user exit
Hi,
In the user exit of VA02, I need to identify/detect whether the header or item texts for the sales order have changed.
Please advise on this.
Regards,
Shreyas
Normally the system stores the old values in X-tables and the new values in Y-tables. Check whether you have access to these in your user exit. If you give the user exit name, someone will be able to guide you.
hith
Sunil Achyut -
Name of the structure needed for sales order user exit
Hi,
I am planning to write a user exit which will insert the data into my ztable the moment the new sales order is created and is saved.
I have identified FORM USEREXIT_SAVE_DOCUMENT as the necessary user exit. But the problem is that while inserting the data into the ztable I cannot read it from vbak, as the data is only inserted into that table after the SO is saved.
Thus I need to identify the structure that is used to populate the vbak table, so that I can simultaneously insert the data into the ztable as well. Can anyone help me with the name of the structure, so that these fields, i.e. vbeln, vkorg, vtweg, spart, are inserted into my ztable?
I also need to know how to convert the net value, i.e. vbak-netwr, into Indian Rupees, as it gets stored as dollars...
Thanks,
Vinod.
Hi,
One of the structures used is RV45A; several other structures are used as well for various calculations.
Check program SAPMV45A for the same.
In order to convert vbak-netwr to rupees, use the statement below:
WRITE vbak-netwr TO zvbak-netwr CURRENCY 'INR'.
Regards,
Raghavendra
-
Query help needed for querybuilder to use with lcm cli
Hi,
I had set up several queries to run with the lcm cli in order to back up personal folders, inboxes, etc. to lcmbiar files to use as backups. I have seen a few posts that are similar, but I have a specific question/concern.
I just recently had to reference one of these backups, only to find it was incomplete. Does the query used by the lcm cli also only pull the first 1000 rows? Is there a way to change this limit somewhere?
Also, since importing this lcmbiar file for something 'generic' like 'all personal folders' pulls in WAY too much stuff, is there a better way to limit this? I am open to suggestions, but it would almost be better if I could create individual lcmbiar output files on a per-user basis. That way, when/if I need to restore someone's personal folder contents, for example, I could find them by username and import just that lcmbiar file, as opposed to all 3000 of our users. I am not quite sure how to accomplish this...
Currently, with my limited windows scripting knowledge, I have set up a bat script to run each morning, that creates a 'runtime' properties file from a template, such that the lcmbiar file gets named uniquely for that day and its content. Then I call the lcm_cli using the proper command. The query within the properties file is currently very straightforward - select * from CI_INFOOBJECTS WHERE SI_ANCESTOR = 18.
To do what I want to do...
1) I'd first need a current list of usernames in a text file, that could be read (?) in and parsed to single out each user (remember we are talking about 3000) - not sure the best way to get this.
2) Then, instead of just updating the lcmbiar file name with a unique name as I do currently, I would also update the query (which would be different altogether): SELECT * from CI_INFOOBJECTS where SI_OWNER = '<username>' AND SI_ANCESTOR = 18.
In theory, that would grab everything owned by that user in their personal folder - right? and write it to its own lcmbiar file to a location I specify.
I just think chunking something like this is more effective, and BO has no built-in backup capability that already does this. We are on BO 4.0 SP7 right now and move to 4.1 SP4 over the summer.
Any thoughts on this would be much appreciated.
thanks,
Missy
Just wanted to pass along that SAP Support pointed me to KBA 1969259, which had some good example queries in it (they were helping me with a concern I had over the lcmbiar file output, not with query design). I was able to tweak one of the sample queries in this KBA to give me more of what I was after...
SELECT TOP 10000 static, relationships, SI_PARENT_FOLDER_CUID, SI_OWNER, SI_PATH FROM CI_INFOOBJECTS,CI_APPOBJECTS,CI_SYSTEMOBJECTS WHERE (DESCENDENTS ("si_name='Folder Hierarchy'","si_name='<username>'"))
This exports inboxes, personal folders, categories, and roles, which is more than I was after but still necessary to back up... so in a way it is actually better, because I have one lcmbiar file per user containing all their 'personal' objects.
So between narrowing down my set of users to only those who actually have saved things to their personal folder, and now having a query that actually returns what I expect it to return, along with the help below for a job to clean up the excessive number of promotion jobs I am now creating... I am all set!
Hopefully this can help someone else too!
Thanks,
missy -
Pagination query help needed for large table - force a different index
I'm using a slight modification of the pagination query from over at Ask Tom's: [http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html]
Mine looks like this when fetching the first 100 rows of all members with last name Smith, ordered by join date:
SELECT members.*
FROM members,
     ( SELECT RID, rownum rnum
       FROM ( SELECT rowid AS RID
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
       WHERE rownum <= 100 )
WHERE rnum >= 1
  AND RID = members.rowid
The difference between this and the one at Ask Tom's is that my innermost query just returns the ROWID. Then in the outermost query we join the ROWIDs returned to the members table, after we have pruned the ROWIDs down to only the chunk of 100 we want. This makes it MUCH faster (verifiably) on our large tables, as it is able to use the index in the innermost query (well... read on).
The problem I have is this:
SELECT rowid AS RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
This will use the index on the predicate column (last_name) instead of the unique index I have defined for the joindate column (joindate, sequence), as is verifiable with EXPLAIN PLAN. It is much slower this way on a large table. So I can hint it using either of the following methods:
SELECT /*+ index(members, joindate_idx) */ rowid as RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
SELECT /*+ first_rows(100) */ rowid AS RID
FROM members
WHERE last_name = 'Smith'
ORDER BY joindate
Either way, it now uses the index on the ORDER BY column (joindate_idx), so it is much faster, as it does not have to do a sort (remember, VERY large table, millions of records). So that seems good. But now, in my outermost query, I join the rowid with the meaningful columns of data from the members table, as commented below:
SELECT members.*          -- Select all data from members table
FROM members,             -- members table added to FROM clause
     ( SELECT RID, rownum rnum
       FROM ( SELECT /*+ index(members, joindate_idx) */ rowid AS RID  -- Hint is ignored now that I am joining in the outer query
              FROM members
              WHERE last_name = 'Smith'
              ORDER BY joindate )
       WHERE rownum <= 100 )
WHERE rnum >= 1
  AND RID = members.rowid -- Merge the members table on the rowid we pulled from the inner queries
Once I do this join, it goes back to using the predicate index (last_name) and has to perform the sort once it finds all matching values (which can be a lot in this table; there is high cardinality on some columns).
So my question is, in the full query above, is there any way I can get it to use the ORDER BY column for indexing to prevent it from having to do a sort? The join is what causes it to revert back to using the predicate index, even with hints. Remove the join and just return the ROWIDs for those 100 records and it flies, even on 10 million records.
It'd be great if there was some generic hint that could accomplish this, such that if we change the table/columns/indexes, we don't need to change the hint (the FIRST_ROWS hint is a good example of this, while the INDEX hint is the opposite), but any help would be appreciated. I can provide explain plans for any of the above if needed.
Thanks!
Lakmal Rajapakse wrote:
OK here is an example to illustrate the advantage:
SQL> set autot traceonly
SQL> select * from (
2 select a.*, rownum x from
3 (
4 select a.* from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 )
9 where x >= 1100
10 /
101 rows selected.
Execution Plan
Plan hash value: 3711662397
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 1 | VIEW | | 1200 | 521K| 192 (0)| 00:00:03 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 1200 | 506K| 192 (0)| 00:00:03 |
| 4 | TABLE ACCESS BY INDEX ROWID| EVENTS | 253M| 34G| 192 (0)| 00:00:03 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 1200 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("X">=1100)
2 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
443 consistent gets
0 physical reads
0 redo size
25203 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
SQL>
SQL>
SQL> select * from aoswf.events a, (
2 select rid, rownum x from
3 (
4 select rowid rid from aoswf.events a
5 order by EVENT_DATETIME
6 ) a
7 where rownum <= 1200
8 ) b
9 where x >= 1100
10 and a.rowid = rid
11 /
101 rows selected.
Execution Plan
Plan hash value: 2308864810
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 201K| 261K (1)| 00:52:21 |
| 1 | NESTED LOOPS | | 1200 | 201K| 261K (1)| 00:52:21 |
|* 2 | VIEW | | 1200 | 30000 | 260K (1)| 00:52:06 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 253M| 2895M| 260K (1)| 00:52:06 |
| 5 | INDEX FULL SCAN | EVEN_IDX02 | 253M| 4826M| 260K (1)| 00:52:06 |
| 6 | TABLE ACCESS BY USER ROWID| EVENTS | 1 | 147 | 1 (0)| 00:00:01 |
Predicate Information (identified by operation id):
2 - filter("X">=1100)
3 - filter(ROWNUM<=1200)
Statistics
8 recursive calls
0 db block gets
117 consistent gets
0 physical reads
0 redo size
27539 bytes sent via SQL*Net to client
281 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed
Lakmal (and OP),
I am not sure what advantage you are trying to show here. But considering that we are talking about a pagination query here, where the order of records is important, your two queries will not always generate output in the same order. Here is the test case:
SQL> select * from v$version ;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 2
optimizer_features_enable string 10.2.0.1
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_mode string ALL_ROWS
optimizer_secure_view_merging boolean TRUE
SQL> show parameter pga
NAME TYPE VALUE
pga_aggregate_target big integer 103M
SQL> create table t nologging as select * from all_objects where 1 = 2 ;
Table created.
SQL> create index t_idx on t(last_ddl_time) nologging ;
Index created.
SQL> insert /*+ APPEND */ into t (owner, object_name, object_id, created, last_ddl_time) select owner, object_name, object_id, created, sysdate - dbms_random.value(1, 100) from all_objects order by dbms_random.random;
40617 rows created.
SQL> commit ;
Commit complete.
SQL> exec dbms_stats.gather_table_stats(user, 'T', cascade=>true);
PL/SQL procedure successfully completed.
SQL> select object_id, object_name, created from t, (select rid, rownum rn from (select rowid rid from t order by created desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME CREATED
47686 ALL$OLAP2_JOIN_KEY_COLUMN_USES 28-JUL-2009 08:08:39
47672 ALL$OLAP2_CUBE_DIM_USES 28-JUL-2009 08:08:39
47681 ALL$OLAP2_CUBE_MEASURE_MAPS 28-JUL-2009 08:08:39
47682 ALL$OLAP2_FACT_LEVEL_USES 28-JUL-2009 08:08:39
47685 ALL$OLAP2_AGGREGATION_USES 28-JUL-2009 08:08:39
47692 ALL$OLAP2_CATALOGS 28-JUL-2009 08:08:39
47665 ALL$OLAPMR_FACTTBLKEYMAPS 28-JUL-2009 08:08:39
47688 ALL$OLAP2_DIM_LEVEL_ATTR_MAPS 28-JUL-2009 08:08:39
47689 ALL$OLAP2_DIM_LEVELS_KEYMAPS 28-JUL-2009 08:08:39
47669 ALL$OLAP9I2_HIER_DIMENSIONS 28-JUL-2009 08:08:39
47666 ALL$OLAP9I1_HIER_DIMENSIONS 28-JUL-2009 08:08:39
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc ;
OBJECT_ID OBJECT_NAME LAST_DDL_TIME
37534 com/sun/mail/smtp/SMTPMessage 06-FEB-2010 03:46:14
13133 oracle/jdbc/driver/OracleLog$3 06-FEB-2010 03:45:44
11749 /b9fe5b99_OraRTStatementComman 06-FEB-2010 03:43:49
42266 SI_GETCLRHSTGRFTR 06-FEB-2010 03:40:20
16695 /2940a364_RepIdDelegator_1_3 06-FEB-2010 03:38:17
36539 sun/io/ByteToCharMacHebrew 06-FEB-2010 03:28:57
26815 /7a628fb8_DefaultHSBChooserPan 06-FEB-2010 03:26:55
14044 /d29b81e1_OldHeaders 06-FEB-2010 03:12:12
36145 /4e492b6f_SerProfileToClassErr 06-FEB-2010 03:11:09
12920 /25f8f3a5_BasicSplitPaneUI 06-FEB-2010 03:11:06
15752 /2f494dce_JDWPThreadReference 06-FEB-2010 03:09:31
11 rows selected.
SQL> set autotrace traceonly
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid order by last_ddl_time desc
2 ;
11 rows selected.
Execution Plan
Plan hash value: 44968669
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 180 (2)| 00:00:03 |
| 1 | SORT ORDER BY | | 1200 | 91200 | 180 (2)| 00:00:03 |
|* 2 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 3 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 4 | COUNT STOPKEY | | | | | |
| 5 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 6 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 7 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("T".ROWID="T1"."RID")
3 - filter("RN">=1190)
4 - filter(ROWNUM<=1200)
Statistics
1 recursive calls
0 db block gets
348 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
343 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from t, (select rid, rownum rn from (select rowid rid from t order by last_ddl_time desc) where rownum <= 1200) t1 where rn >= 1190 and t.rowid = t1.rid ;
11 rows selected.
Execution Plan
Plan hash value: 168880862
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 1 | HASH JOIN | | 1200 | 91200 | 179 (2)| 00:00:03 |
|* 2 | VIEW | | 1200 | 30000 | 98 (0)| 00:00:02 |
|* 3 | COUNT STOPKEY | | | | | |
| 4 | VIEW | | 40617 | 475K| 98 (0)| 00:00:02 |
| 5 | INDEX FULL SCAN DESCENDING| T_IDX | 40617 | 793K| 98 (0)| 00:00:02 |
| 6 | TABLE ACCESS FULL | T | 40617 | 2022K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("T".ROWID="T1"."RID")
2 - filter("RN">=1190)
3 - filter(ROWNUM<=1200)
Statistics
0 recursive calls
0 db block gets
349 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> select object_id, object_name, last_ddl_time from (select t1.*, rownum rn from (select * from t order by last_ddl_time desc) t1 where rownum <= 1200) where rn >= 1190 order by last_ddl_time desc ;
11 rows selected.
Execution Plan
Plan hash value: 882605040
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 1 | VIEW | | 1200 | 62400 | 80 (2)| 00:00:01 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 40617 | 1546K| 80 (2)| 00:00:01 |
|* 4 | SORT ORDER BY STOPKEY| | 40617 | 2062K| 80 (2)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T | 40617 | 2062K| 80 (2)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("RN">=1190)
2 - filter(ROWNUM<=1200)
4 - filter(ROWNUM<=1200)
Statistics
175 recursive calls
0 db block gets
388 consistent gets
0 physical reads
0 redo size
1063 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
4 sorts (memory)
0 sorts (disk)
11 rows processed
SQL> set autotrace off
SQL> spool off
As you will see, the join query here has to have an ORDER BY clause at the end to ensure that records are correctly sorted. You cannot rely on the optimizer choosing the NESTED LOOP join method and, as the above example shows, when the optimizer chooses a HASH JOIN, Oracle is free to return rows in no particular order.
The query that does not involve a join always returns rows in the desired order. Adding an ORDER BY does add a step in the plan for the query using the join, but does not affect the other query. -
Help needed in sales order/ quality management
In standard SAP it is possible to assign an inspection to the delivery type, but the client wants a non-standard inspection to be done at the time of the order. Is this possible? If it is, please help me with the customizing. Thank you in advance.
M. Ali
Hi,
Please try table VBFA with fields VBELV = <sales order> and VBTYP_N = 'J' (Shipping/Delivery).
Also try this:
If you know the delivery number for the order, you can use FM VTTP_READ.
If you need the delivery number from the sales order, use VBAP-VGBEL.
Pass the delivery number to the FM, or look the value up in the VTTP table.
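As a concrete sketch of the VBFA lookup suggested above (the order number here is a placeholder):

```sql
-- Hypothetical example: find deliveries (VBTYP_N = 'J') that follow
-- on from sales order 0000012345 in the document flow table VBFA
SELECT vbelv,    -- preceding document (the sales order)
       vbeln     -- follow-on document (the delivery)
FROM vbfa
WHERE vbelv   = '0000012345'
  AND vbtyp_n = 'J'
```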
thanks,
-
Hi all,
I have created a UDF attachment link in the sales order; once the attachment is done, the A/R invoice can be raised from the template.
The option to copy from the sales order to the A/R invoice should be blocked in a stored procedure:
"Without an attachment, the A/R invoice should not be raised."
How do I write the stored procedure transaction in SQL?
Hi Ranjith,
try this one...
for Line UDF
IF (@object_type = '13') and (@transaction_type IN ('A', 'U'))
BEGIN
IF EXISTS (SELECT T0.Docentry from OINV T0 Inner Join INV1 T1 ON T0.Docentry=T1.Docentry where
T1.DocEntry = @list_of_cols_val_tab_del and (T1.[YOUR_UDF] is null or T1.[YOUR_UDF] ='' ))
BEGIN
Select @error = 10, @error_message = 'Sales Invoice cannot be raised without a Sales Order attachment'
END
END
for Header UDF
IF (@object_type = '13') and (@transaction_type IN ('A', 'U'))
BEGIN
IF EXISTS (SELECT T0.Docentry from OINV T0 where
T0.DocEntry = @list_of_cols_val_tab_del and (T0.[YOUR_UDF] is null or T0.[YOUR_UDF] ='' ))
BEGIN
Select @error = 10, @error_message = 'Sales Invoice cannot be raised without a Sales Order attachment'
END
END
regards,
Fidel -
Help needed for data updation in User Defined Tables
Hello Experts,
I am developing an add-on in SAP B1 8.8 to input data into a user-defined table, described as under:
Table name: DriverMst (UDT type: No Object)
Description: stores the driver master data, which is used as a reference in the Sales Delivery form and in driver data management activity.
User-defined fields (Data Name / Data Source / Type / Size / Pane Level / Description):
- Driver Code / Code / Alphanumeric / - / 0 / No object table fixed field
- System Name / Name / Alphanumeric / 30 / 0 / No object table fixed field
- Full Name / FullName / Text / 50 / 0 / -
- Father Name / FatherName / Text / 50 / 0 / -
- Birth Date / BirthDate / Date / - / 0 / -
- Phone Number / PhoneNo / Alphanumeric / 50 / 0 / -
- Mobile No / MobileNo / Alphanumeric / 13 / 0 / -
I have created a form using Screen Painter, displaying text boxes bound to the table.
This form works absolutely fine when there is some data in the table (i.e. browsing using navigation).
My problem is: when I click the Add button from the toolbar, the "OK" button turns to "Add", meaning the form is set to Add mode; but when I click "Add" after entering some data, nothing happens and the input data is not stored in the table. The same "OK" button turns to "Update" when I change loaded data, but my changes are not reflected in the table after I click "Update".
Thanks Nagarajan,
None.
There is no such query. The table fields is directly linked to Edit Box or Combo Box in form.
From the examples I learned that I have to do something like this to get my table updated
Dim oUsrTbl As SAPbobsCOM.UserTable
Dim Res As Integer
oUsrTbl = oCompany.UserTables.Item("DRIVERMST")
oUsrTbl.Code = oBPC.Value 'Item Specific of Driver Code Edit Box
oUsrTbl.Name = Left(oBPN.Value, 30) 'Item Specific of Name Edit Box
oUsrTbl.UserFields.Fields.Item("U_FullName").Value = oMFN.Value
oUsrTbl.UserFields.Fields.Item("U_FatherName").Value = oFTHN.Value
oUsrTbl.UserFields.Fields.Item("U_BirthDate").Value = oDOB.Value
oUsrTbl.UserFields.Fields.Item("U_PhoneNo").Value = oPHN.Value
' (similar for the rest of the fields)
Res = oUsrTbl.Add()
Just let me know whether it is necessary to do it like the above. To be frank, there are a few more fields and matrices on the form which I didn't mention; I am just trying to get past the first step so I can proceed further.
Regards -
Tweak for SQL query - help needed for small change
Hi.
I am trying to run a script that checks for used space on all tablespaces and returns the results.
So far so good:
set lines 200 pages 2000
col tablespace_name heading 'Tablespace' format a30 truncate
col total_maxspace_mb heading 'MB|Max Size' format 9G999G999
col total_allocspace_mb heading 'MB|Allocated' format 9G999G999
col used_space_mb heading 'MB|Used' format 9G999G999D99
col free_space_mb heading 'MB|Free Till Max' like used_space_mb
col free_space_ext_mb heading 'MB|Free Till Ext' like used_space_mb
col pct_used heading '%|Used' format 999D99
col pct_free heading '%|Free' like pct_used
break on report
compute sum label 'Total Size:' of total_maxspace_mb total_allocspace_mb used_space_mb - free_space_mb (used_space_mb/total_maxspace_mb)*100 on report
select
alloc.tablespace_name,
(alloc.total_allocspace_mb - free.free_space_mb) used_space_mb,
free.free_space_mb free_space_ext_mb,
((alloc.total_allocspace_mb - free.free_space_mb)/alloc.total_maxspace_mb)*100 pct_used,
((free.free_space_mb+(alloc.total_maxspace_mb-alloc.total_allocspace_mb))/alloc.total_maxspace_mb)*100 pct_free
FROM (SELECT tablespace_name,
ROUND(SUM(CASE WHEN maxbytes = 0 THEN bytes ELSE maxbytes END)/1048576) total_maxspace_mb,
ROUND(SUM(bytes)/1048576) total_allocspace_mb
FROM dba_data_files
WHERE file_id NOT IN (SELECT FILE# FROM v$recover_file)
GROUP BY tablespace_name) alloc,
(SELECT tablespace_name,
SUM(bytes)/1048576 free_space_mb
FROM dba_free_space
WHERE file_id NOT IN (SELECT FILE# FROM v$recover_file)
GROUP BY tablespace_name) free
WHERE alloc.tablespace_name = free.tablespace_name (+)
ORDER BY pct_used DESC
The above returns something like this:
MB MB % %
Tablespace Used Free Till Ext Used Free
APPS_TS_ARCHIVE 1,993.13 54.88 97.32 2.68
APPS_TS_TX_IDX 14,756.13 1,086.88 91.37 8.63
APPS_TS_TX_DATA 20,525.75 594.25 80.18 19.82
APPS_TS_MEDIA 6,092.00 180.00 74.37 25.63
APPS_TS_INTERFACE 13,177.63 366.38 71.49 28.51
The above works fine, but I would like to further change the query so that only those tablespaces with free space less than 5% (or used space more than 95%) are returned.
I have been working on this all morning and wanted to open it up to the masters!
I have tried using WHERE pct_used > 95 but to no avail.
Any advice would be appreciated.
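One likely reason WHERE pct_used > 95 fails is that a column alias cannot be referenced in the WHERE clause of the same query block. A common workaround, sketched here against the query above (the elided part stands for the full FROM clause already shown), is to wrap the whole SELECT in an inline view and filter on the alias in the outer query:

```sql
-- Sketch: wrap the original report query in an inline view so the
-- computed alias pct_used can be filtered in the outer WHERE clause.
SELECT *
FROM ( SELECT alloc.tablespace_name,
              ((alloc.total_allocspace_mb - free.free_space_mb)
                 / alloc.total_maxspace_mb) * 100 pct_used
       FROM ...  -- the rest of the original query goes here
     ) t
WHERE t.pct_used > 95
ORDER BY t.pct_used DESC
```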
Many thanks.
10.2.0.4
Linux Red Hat 4.
Thanks for that.
What is confusing is that the query below works for every other database (about 10 others) but not this one(?):
SQL> set lines 200 pages 2000
SQL>
SQL> col tablespace_name heading 'Tablespace' format a30 truncate
SQL> col total_maxspace_mb heading 'MB|Max Size' format 9G999G999
SQL> col total_allocspace_mb heading 'MB|Allocated' format 9G999G999
SQL> col used_space_mb heading 'MB|Used' format 9G999G999D99
SQL> col free_space_mb heading 'MB|Free Till Max' like used_space_mb
SQL> col free_space_ext_mb heading 'MB|Free Till Ext' like used_space_mb
SQL> col pct_used heading '%|Used' format 999D99
SQL> col pct_free heading '%|Free' like pct_used
SQL>
SQL> break on report
SQL> compute sum label 'Total Size:' of total_maxspace_mb total_allocspace_mb used_space_mb - free_space_mb (used_space_mb/total_maxspace_mb)*100 on report
SQL>
SQL> select /*+ALL_ROWS */
2 alloc.tablespace_name,
3 alloc.total_maxspace_mb,
4 alloc.total_allocspace_mb,
5 (alloc.total_allocspace_mb - free.free_space_mb) used_space_mb,
6 free.free_space_mb+(alloc.total_maxspace_mb-alloc.total_allocspace_mb) free_space_mb,
7 free.free_space_mb free_space_ext_mb,
8 ((alloc.total_allocspace_mb - free.free_space_mb)/alloc.total_maxspace_mb)*100 pct_used,
9 ((free.free_space_mb+(alloc.total_maxspace_mb-alloc.total_allocspace_mb))/alloc.total_maxspace_mb)*100 pct_free
10 FROM (SELECT tablespace_name,
11 ROUND(SUM(CASE WHEN maxbytes = 0 THEN bytes ELSE maxbytes END)/1048576) total_maxspace_mb,
12 ROUND(SUM(bytes)/1048576) total_allocspace_mb
13 FROM dba_data_files
14 WHERE file_id NOT IN (SELECT FILE# FROM v$recover_file)
15 GROUP BY tablespace_name) alloc,
16 (SELECT tablespace_name,
17 SUM(bytes)/1048576 free_space_mb
18 FROM dba_free_space
19 WHERE file_id NOT IN (SELECT FILE# FROM v$recover_file)
20 GROUP BY tablespace_name) free
21 WHERE alloc.tablespace_name = free.tablespace_name (+)
22 ORDER BY pct_used DESC
23 /
((alloc.total_allocspace_mb - free.free_space_mb)/alloc.total_maxspace_mb)*100 pct_used,
ERROR at line 8:
ORA-01476: divisor is equal to zero -
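The ORA-01476 on this one database suggests that some tablespace ends up with total_maxspace_mb of 0 (for example after the ROUND to MB). A common guard, shown as a sketch on the offending expression, is NULLIF, which makes the division return NULL instead of raising an error:

```sql
-- Sketch: NULLIF makes a zero divisor yield NULL rather than ORA-01476
((alloc.total_allocspace_mb - free.free_space_mb)
   / NULLIF(alloc.total_maxspace_mb, 0)) * 100 pct_used,
```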
Help needed for a new Mac user with his photo and video libraries
After swapping my Ericsson for an iPhone a year ago, then taking delivery of an iPad on launch day, I have finally taken the plunge and swapped my Windows PC for a lovely 27" iMac... what have I been doing all these years using Windows? Using the Mac is a joy.
As I get more involved with their products I am constantly fascinated (and frustrated) by Apple's way of doing things. This situation is no exception, and I need Apple people with Apple experience to help me make a decision.
I am looking at software to organise my photos and videos. I have approximately 7000 photos residing on a Windows Home Server. In addition to that there are some 200 or so HD videos taken with either my Sanyo Xacti camcorder or Sony DSC-W300 camera; both cameras, I think, produce MPEG format.
My first experience was iPhoto. Great for the 'price', but not very flexible: single libraries, poor editing facilities. And because I have files referenced and not copied to my iPhoto library, if I delete from the library it's not deleted from the server (and vice versa); this leads to images being displayed in the library that don't exist, etc.
Then I downloaded the Aperture 3 trial. Great, until I came across the 'Unsupported file format' situation with the MPEG videos that, strangely enough, iPhoto will recognise and play (the Apple way of doing things).
1. Should I put the photos on the Mac instead of the server?
2. Is there a better way of managing the images for deletions etc.?
3. Should I stick with Aperture because of the editing?
4. Is there better software, or a better way of managing the videos?
Sorry for the ramble, but it's all new to me.
Had a good session with all three of the recommended programs and now have a better understanding of the excellent advice I have been given.
iPhoto
great `free' program that is easy to use but somewhat limited on features. think I would always be looking to upgrade from here (feels a bit boring)
Aperture
really enjoyed using this but at £170! a hefty price tag considering its limited file support. fortunately its only my camera video clips (MPEG) that it wont view. my camcorder files (MP4) are fine so this is looking better as a one stop solution.
Elements
much more than just a image touch up. considering what it can do.. very good value at around £60. teamed with iPhoto it becomes even more attractive.
My issue still remains with the master files' location.
I would prefer to keep all my media on the external server to ensure I have a fail-safe recovery option. It's a 4TB, 5-disk affair which contains all the family music, videos, images and DVDs etc. (likely to change to a Mac server in the future!)
One feature that I can't find (even in Aperture, which surprises me) is the ability to automatically manage deletions from referenced locations, i.e. delete a file from the server and, when Aperture opens up, the thumbnails are updated; or delete a referenced master in Aperture and an option comes up to delete the referenced file on the external drive.
How would a professional photographer or studio using Aperture manage this process (or would they use something else)?
Forgot to mention that through our student facilities I can get a discounted copy of Photoshop CS5 for around £175, which is close to Aperture's price. Is this a worthwhile option or overkill?
Message was edited by: buttons129 -
Vertex help needed on sales order
So, this is the scenario I have:
I have 2 customers, A and B. A is exempt from tax, B is not. I have 2 materials in the line item: x and y. Everything works fine with the customers and taxes. But when I go in for one of the line item materials (let's say x) and change the tax classification to something that I have entered into Vertex, where the customer should be taxed, and then choose a customer that is exempt from tax, it shows no tax, when the material is supposed to have tax because of the tax classification. Any suggestions on what I'm doing wrong? Thank you!
Hi,
Maintain the condition record for the combination on which the tax needs to be determined.
Regards,
Sp.Balaji. -
Little help needed for a poor pc user
Okay, I just started using a Mac and I'm having a few problems that I can't figure out myself. When I downloaded a codec pack, the computer didn't recognize the .exe file, and I'm just wondering why that is. This isn't the first time I have encountered this problem either. So I'm thinking this is something that happens because not everybody makes Mac-friendly software. Can someone please help me understand how to fix this problem?
.exe files are Windows executables, so Mac OS X can't run them natively. You're not entirely stuck, though, if you decide to install Windows either alongside Mac OS X (Boot Camp Beta) or in a virtual machine (Parallels Desktop).
Yang -
Customize bapi for sales orders with customer fields
The situation is as follows.
The customer has added a structure to table VBAP. They used an append structure, but without an include.
Now I have to write a BAPI call that is able to fill those fields. I have followed several posts and documentation, but with one change: instead of passing the original structure in BAPE_VBAP, BAPE_VBAPX, etc., I pass another structure that is an exact copy.
Could this be the reason that nothing gets written? The return structure contains these:
ORDER_HEADER_IN has been processed successfully
return..:
ITEM_IN has been processed successfully
return..:
But nothing is shown in VBAP.
Thanks for the reply, but this is something I knew. My problem was whether the structure added to those BAPE_* structures has to be exactly the structure appended to VBAP, or just an identical structure (e.g. if I appended VBAPEXT to VBAP, should I append VBAPEXT to BAPE_*, or could I use VBAPEXT_copy, which is exactly the same with a different name)?
Anyway, we solved the problem, and it seems that we can use an identical structure with the same name.
BAPI SALES ORDER CHANGE CUSTOM FIELD
Hello Gurus,
I am trying to update a custom field on a sales order. The field name is ZFIELD1 in VBAK. The field has been added to VBAKKOM, VBAKKOMX, VBAKKOZ, VBAKKOZX, BAPE_VBAK and BAPE_VBAKX. I guess I need to use the EXTENSIONIN table to pass this field's value so the BAPI can change it, but I am not sure exactly how to populate it. Can someone tell me exactly how I need to do this?
Also, the documentation says to fill EXTENSIONIN this way:
STRUCTURE      VALUEPART1
               1234561234567890123
BAPE_VBAP      0000004711000020 XYZ
BAPE_VBAPX     0000004711000020 X
What is the 1234561234567890123? Is it a field?
Thanks,
KB
Hi,
Check this example. I am updating a VBAP field; you can do the same with VBAK for your case.
PARAMETERS: P_VBELN TYPE VBAK-VBELN.

* Item data, update flags, extension container and return messages
DATA: T_LINE LIKE BAPISDITM OCCURS 0 WITH HEADER LINE.
DATA: T_LINEX LIKE BAPISDITMX OCCURS 0 WITH HEADER LINE.
DATA: T_EXTEN LIKE BAPIPAREX OCCURS 0 WITH HEADER LINE.
DATA: T_RETURN LIKE BAPIRET2 OCCURS 0 WITH HEADER LINE.
DATA: BAPE_VBAP LIKE BAPE_VBAP.
DATA: BAPE_VBAPX LIKE BAPE_VBAPX.
DATA: ORDER_HEADERX LIKE BAPISDH1X.

* 'U' = update an existing document
ORDER_HEADERX-UPDATEFLAG = 'U'.

* Item 000010 is the line to be changed
T_LINE-ITM_NUMBER = '000010'.
APPEND T_LINE.
T_LINEX-ITM_NUMBER = '000010'.
T_LINEX-UPDATEFLAG = 'U'.
APPEND T_LINEX.

* Fill the extension structure with the key (VBELN/POSNR) and the
* custom field value (YYFREETEXT is the customer append field here)
BAPE_VBAP-VBELN = P_VBELN.
BAPE_VBAP-POSNR = '000010'.
BAPE_VBAP-YYFREETEXT = '02'.
T_EXTEN-STRUCTURE = 'BAPE_VBAP'.
* The structure name occupies the first 30 characters of BAPIPAREX;
* the flat field values start at offset 30 (VALUEPART1)
T_EXTEN+30 = BAPE_VBAP.
APPEND T_EXTEN.

* The matching 'X' structure flags which custom fields to update
BAPE_VBAPX-VBELN = P_VBELN.
BAPE_VBAPX-POSNR = '000010'.
BAPE_VBAPX-YYFREETEXT = 'X'.
T_EXTEN-STRUCTURE = 'BAPE_VBAPX'.
T_EXTEN+30 = BAPE_VBAPX.
APPEND T_EXTEN.

CALL FUNCTION 'BAPI_SALESORDER_CHANGE'
  EXPORTING
    SALESDOCUMENT    = P_VBELN
    ORDER_HEADER_INX = ORDER_HEADERX
  TABLES
    RETURN           = T_RETURN
    ORDER_ITEM_IN    = T_LINE
    ORDER_ITEM_INX   = T_LINEX
    EXTENSIONIN      = T_EXTEN.

COMMIT WORK.
Thanks,
Naren
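A note on the EXTENSIONIN layout asked about above: the digit string printed over VALUEPART1 in the BAPI documentation is just a column ruler marking character positions, not a field. A BAPIPAREX row is one flat character line: the structure name in the first 30 characters, then the key fields and custom values packed back to back. A minimal Python sketch of that packing (illustrative only; `pack_extension_row` is a made-up helper, and the assumed widths are VBELN 10 characters and POSNR 6):

```python
# Illustrative sketch of how an EXTENSIONIN (BAPIPAREX) row is assembled.
# Assumed widths: STRUCTURE = 30 chars, VBELN = 10, POSNR = 6; the custom
# field value follows the key fields inside VALUEPART1 (offset 30 onward).

def pack_extension_row(structure: str, vbeln: str, posnr: str, value: str) -> str:
    """Build the flat character line the ABAP move 'T_EXTEN+30 = ...' creates."""
    valuepart = vbeln.zfill(10) + posnr.zfill(6) + value
    return structure.ljust(30) + valuepart

row = pack_extension_row("BAPE_VBAP", "4711", "20", "XYZ")
assert row[:30].strip() == "BAPE_VBAP"   # structure name, padded to 30
assert row[30:40] == "0000004711"        # VBELN, zero-padded to 10
assert row[40:46] == "000020"            # POSNR, zero-padded to 6
assert row[46:] == "XYZ"                 # custom field value
```

This is why the documentation example shows "0000004711000020 XYZ": document number 4711 and item 20, zero-padded to their fixed widths, followed by the custom value.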