Performance problem selecting data from the database
Hello all,
Could you please suggest which SELECT statement is best for fetching data from the database when a table contains more than 10 lakh (1 million) records?
I am using the SELECT ... PACKAGE SIZE n statement, but it is taking a lot of time.
With best regards,
Srinivas Rathod
Hi Srinivas,
If you are selecting from huge data volumes, you can reduce the runtime somewhat by using better techniques. I do not think SELECT ... PACKAGE SIZE by itself will give good performance.
See the examples below:
ABAP Code Samples for Simple Performance Tuning Techniques
1. Query including select and sorting functionality
Code A
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on
f~matnr = g~matnr where g~stlal = '01' order by f~ernam.
Code B
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01'.
sort itab_new by ernam.
Both the above codes essentially perform the same function, but the execution time of Code B is considerably less than that of Code A. Reason: the ORDER BY clause attached to a select statement increases its execution time on the database, so it is usually cheaper to select the data first and sort the internal table once afterwards.
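The same pattern can be sketched outside ABAP. A minimal Python analogy (sample rows invented for illustration) of selecting without ORDER BY and then sorting once in the application layer:

```python
# Mirrors "SELECT ... INTO TABLE itab_new" (no ORDER BY)
# followed by "SORT itab_new BY ernam". Row values are made up.
rows = [
    {"matnr": "M3", "ernam": "CAROL"},
    {"matnr": "M1", "ernam": "ALICE"},
    {"matnr": "M2", "ernam": "BOB"},
]

# One in-memory sort after the fetch, instead of ORDER BY in the query.
rows.sort(key=lambda r: r["ernam"])

print([r["matnr"] for r in rows])
```

The database then returns rows in whatever order is cheapest for it, and the single in-memory sort replaces the server-side ordering work.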
2. Performance Improvement Due to Identical Statements Execution Plan
Consider the queries below and how reusing an execution plan saves execution time.
Code C (Non-identical Select Statements)
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01' .
sort itab_new.
select f~matnr f~ernam
f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as
f inner join mast as g on f~matnr =
g~matnr where g~stlal
= '01' .
Code D (Identical Select Statements)
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
werks like mast-werks,
aenam like mast-aenam,
stlal like mast-stlal,
end of itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01' .
sort itab_new.
select f~matnr f~ernam f~mtart f~matkl g~werks g~aenam g~stlal
into table itab_new from mara as f inner join mast as g on f~matnr =
g~matnr where g~stlal = '01' .
Both the above codes perform the same function, but the version with identical select statements (Code D) executes considerably faster. Reason: during execution, every SQL statement is converted through a series of database processing phases. In the second phase (the Prepare phase) an execution plan is determined for the statement and stored; if an identical select statement is used later in the program, the stored execution plan is reused, saving that time. So keep the text of a select statement identical whenever it is used more than once in a program.
3. Reducing Parse Time Using Aliasing
A statement that does not have a cached execution plan must be parsed before execution, and this parsing phase is highly time and resource consuming; to reduce parse time, every SQL query should include alias names, for the following reasons:
1. Providing an alias name enables the query engine to resolve which table each specified field belongs to.
2. Providing a short alias name (a single-character alias) is more efficient than providing a long alias name.
Code E
select j~matnr j~ernam j~mtart j~matkl
g~werks g~aenam g~stlal into table itab_new from mara as
j inner join mast as g on j~matnr = g~matnr where
g~stlal = '01' .
In the above code the single-character alias names j and g are used.
4. Performance Tuning Using Order by Clause
If a SQL query fetches records that a later READ statement will look up by key values, the lookup can be optimized by ordering the fetched data in the same field order that the READ statement uses for its keys.
Code F
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
end of itab_new.
select MATNR ERNAM MTART MATKL from mara into table itab_new where
MTART = 'HAWA' ORDER BY MATNR ERNAM MTART MATKL.
read table itab_new with key MATNR = 'PAINT1' ERNAM = 'RAMANUM'
MTART = 'HAWA' MATKL = 'OFFICE'.
Code G
tables: mara, mast.
data: begin of itab_new occurs 0,
matnr like mara-matnr,
ernam like mara-ernam,
mtart like mara-mtart,
matkl like mara-matkl,
end of itab_new.
select MATNR ERNAM MTART MATKL from mara into table itab_new where
MTART = 'HAWA' ORDER BY ERNAM MATKL MATNR MTART.
read table itab_new with key MATNR = 'PAINT1' ERNAM = 'RAMANUM'
MTART = 'HAWA' MATKL = 'OFFICE'.
In Code F above, the read statement following the select uses the keys in the order MATNR, ERNAM, MTART, MATKL, so the lookup is less time intensive when the internal table is ordered in that same key order.
5. Performance Tuning Using Binary Search
A very simple but effective way of fine-tuning the performance of a read statement is adding BINARY SEARCH to it (on a sorted table). If the internal table has more than about 20 entries, the traditional linear search becomes the more time-intensive option.
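For readers outside ABAP, a hedged Python analogy of READ TABLE ... WITH KEY ... BINARY SEARCH (sample material numbers invented): sort once, then each lookup costs O(log n) comparisons instead of a linear scan:

```python
import bisect

# The table must be sorted before binary search, just like the
# SORT before READ TABLE ... BINARY SEARCH in the ABAP below.
matnrs = sorted(["11530", "10021", "11999", "10500", "11001"])

def read_binary(table, key):
    """Return True if key is present, located via binary search."""
    i = bisect.bisect_left(table, key)
    return i < len(table) and table[i] == key

print(read_binary(matnrs, "11530"))  # found
print(read_binary(matnrs, "99999"))  # not found
```

The same two-step discipline applies in both languages: an unsorted table silently gives wrong results with binary search, which is why the SORT must come first.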
Code H
select * from mara into corresponding fields of table intab.
sort intab.
read table intab with key matnr = '11530' binary search.
Code I
select * from mara into corresponding fields of table intab.
sort intab.
read table intab with key matnr = '11530'.
Thanks
Seshu
Similar Messages
-
Performance problem with selecting records from BSEG and KONV
Hi,
I am having a performance problem while selecting records from the BSEG and KONV tables. As these two tables contain a large amount of data, the selects are taking a lot of time. Can anyone help me improve the performance? Thanks in advance.
Regards,
Prashant

Hi,
Some steps to improve your performance:
1. Avoid using SELECT...ENDSELECT... construct and use SELECT ... INTO TABLE.
2. Use WHERE clause in your SELECT statement to restrict the volume of data retrieved.
3. Design your query to use as many index fields as possible, from left to right, in your WHERE clause.
4. Use FOR ALL ENTRIES in your SELECT statement to retrieve the matching records in one shot.
5. Avoid nested SELECT statements, i.e. a SELECT within a LOOP.
6. Avoid using INTO CORRESPONDING FIELDS OF TABLE. Instead use INTO TABLE.
7. Avoid using SELECT * and Select only the required fields from the table.
8. Avoid nested loops when working with large internal tables.
9. Use ASSIGN (field symbols) instead of INTO in LOOPs for table types with large work areas.
10. When in doubt, run transaction SE30 (Runtime Analysis), try the examples, and check your code.
11. Whenever using READ TABLE use BINARY SEARCH addition to speed up the search. Be sure to sort the internal table before binary search. This is a general thumb rule but typically if you are sure that the data in internal table is less than 200 entries you need not do SORT and use BINARY SEARCH since this is an overhead in performance.
12. Use "CHECK" instead of IF/ENDIF whenever possible.
13. Use "CASE" instead of IF/ENDIF whenever possible.
14. Use "MOVE" with individual field moves instead of "MOVE-CORRESPONDING"; it means more coding but is more efficient. -
Performance problem while selecting (extracting) the data
I have one intermediate table.
I am inserting the rows which are derived from a select statement.
The select statement has a where clause which joins a view (created from 5 tables).
The problem is that the select statement which gets the data is taking more time.
I identified the problems like this:
1) The view used in the select statement is not indexed. Is an index necessary on a view?
2) The tables which are used to create the view are already properly indexed.
3) Extracting the data is what takes the most time.
The query below extracts the data and inserts it into the intermediate table:
SELECT 1414 report_time,
2 dt_q,
1 hirearchy_no_q,
p.unique_security_c,
p.source_code_c,
p.customer_specific_security_c user_security_c,
p.par_value par_value, exchange_code_c,
(CASE WHEN p.ASK_PRICE_L IS NOT NULL THEN 1
WHEN p.BID_PRICE_L IS NOT NULL THEN 1
WHEN p.STRIKE_PRICE_L IS NOT NULL THEN 1
WHEN p.VALUATION_PRICE_L IS NOT NULL THEN 1 ELSE 0 END) bill_status,
p.CLASS_C AS CLASS,
p.SUBCLASS_C AS SUBCLASS,
p.AGENT_ADDRESS_LINE1_T AS AGENTADDRESSLINE1,
p.AGENT_ADDRESS_LINE2_T AS AGENTADDRESSLINE2,
p.AGENT_CODE1_T AS AGENTCODE1,
p.AGENT_CODE2_T AS AGENTCODE2,
p.AGENT_NAME_LINE1_T AS AGENTNAMELINE1,
p.AGENT_NAME_LINE2_T AS AGENTNAMELINE2,
p.ASK_PRICE_L AS ASKPRICE,
p.ASK_PRICE_DATE_D AS ASKPRICEDATE,
p.ASSET_CLASS_T AS ASSETCLASS
FROM (SELECT
DISTINCT x.*,m.customer_specific_security_c,m.par_value
FROM
HOLDING_M m JOIN ED_DVTKQS_V x ON
m.unique_security_c = x.unique_security_c AND
m.customer_c = 'CONF100005' AND
m.portfolio_c = 24 AND
m.status_c = 1
WHERE exists
(SELECT 1 FROM ED_DVTKQS_V y
WHERE x.unique_security_c = y.unique_security_c
GROUP BY y.unique_security_c
HAVING MAX(y.trading_volume_l) = x.trading_volume_l)) p
Can anyone please give me valuable suggestions on the performance?

Thanks for the update.
In the select query we used some functions like MAX:
(SELECT 1 FROM ED_DVTKQS_V y
WHERE x.unique_security_c = y.unique_security_c
GROUP BY y.unique_security_c
HAVING MAX(y.trading_volume_l) = x.trading_volume_l)) p
Will these types of functions cause the performance problem? -
Performance problem loading the master data attributes 0EQUIPMENT_ATTR
Hi Experts,
We have a performance problem loading the master data attributes 0EQUIPMENT_ATTR. It runs as a pseudo-delta (full update), and the same InfoPackage runs with different selections. The problem we are facing is that the load runs 2 to 4 hours in the US morning times, but in the US night times it runs for 12-22 hours before finishing successfully, even though it pulls fewer records (which are OK).
When I checked the R/3-side job log (SM37), the job is running late there too. It shows the first and second IDocs coming in a short time, but the 3rd and 4th IDocs come after a 5-7 hour gap to BW, are saved into the PSA, and then go to the InfoObject.
We have user exits for the DataSource and ABAP routines, but they run fine in a short time and the code is not very complex either.
Can you please explain and suggest the steps on the R/3 side and the BW side? How can I fix this performance issue?
Thanks,
dp

Hi,
Check this link for data load performance. Under "Extraction Performance" you will find many useful hints:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
Regards
Andreas -
Problems While Extracting Hours From Date Field
Hi Guys,
Hope you are doing well.
I am facing some problems while extracting hours from a date field. Below is an example of my orders table:
select * from orders;
Order_NO Arrival Time Product Name
1 20-NOV-10 10:10:00 AM Desktop
2 21-NOV-10 17:26:34 PM Laptop
3 22-JAN-11 08:10:00 AM Printer
Earlier there was a requirement to report how many orders take place daily in the orders table. For that I used to write a query with:
arrival_time >= trunc((sysdate-1),'DD')
and arrival_time < trunc((sysdate),'DD')
The above query gives me how many orders took place yesterday.
Now I have a new requirement to generate a report every 4 hours of how many orders took place. For example, if the current time is 8:00 AM IST, then the query should fetch how many orders took place from 4:00 AM till 8:00 AM. The report will run next at 12:00 PM IST, which will give me the orders that took place from 8:00 AM till 12:00 PM.
The report will run every 4 hours a day and report the orders of the last 4 hours. I have a scheduler which will run this query, but how do I make the query fetch the order details which arrived in the last 4 hours? I am not able to achieve this using TRUNC.
Can you please assist me in making this happen? I have checked "EXTRACT" also, but I am not satisfied.
Please help.
Thanks In Advance,
Arijit

You may try something like:
with testdata as (
select sysdate - level/24 t from dual
connect by level < 11
)
select
to_char(sysdate, 'DD-MM-YYYY HH24:MI:SS') s
, to_char(t, 'DD-MM-YYYY HH24:MI:SS') t from testdata
where
t >= trunc(sysdate, 'HH') - numtodsinterval(4, 'HOUR')
S                   T
19-06-2012 16:08:21 19-06-2012 15:08:21
19-06-2012 16:08:21 19-06-2012 14:08:21
19-06-2012 16:08:21 19-06-2012 13:08:21
19-06-2012 16:08:21 19-06-2012 12:08:21

trunc(date, 'HH') truncates the minutes and seconds from the date.
EXTRACT(HOUR FROM ...) works only on timestamps.
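The same truncate-and-look-back logic can be sketched in Python (a hedged analogy, not the Oracle solution itself; the boundaries mirror TRUNC(sysdate, 'HH') minus 4 hours):

```python
from datetime import datetime, timedelta

# Truncate "now" to the hour, then look back 4 hours. An order
# belongs to the report when its arrival time falls in
# [window_start, window_end).
def four_hour_window(now):
    window_end = now.replace(minute=0, second=0, microsecond=0)
    window_start = window_end - timedelta(hours=4)
    return window_start, window_end

start, end = four_hour_window(datetime(2012, 6, 19, 16, 8, 21))
print(start, end)  # 12:00 to 16:00 on the same day
```

Anchoring both ends of the window to the truncated hour keeps consecutive runs from overlapping or leaving gaps, which is the property the 4-hourly report needs.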
regards
Edited by: chris227 on 19.06.2012 14:13 -
When to refresh Servlet data from Data Base
Hello all,
I have a servlet that retrieves a few hundred thousand records from a database table.
The data in the database table is updated once or twice every week.
The same servlet instance serves all users, who access the servlet many times a day.
I would like to avoid retrieving the data from the database on each servlet access,
and instead have the users use the same data, already retrieved and kept in servlet members.
First, what is the best way to avoid a database retrieval on each servlet access?
And how could I have some kind of trigger that refreshes the servlet data from the database every few days?
Thanks in advance for every idea.
Ami

Java_A wrote:
Thanks Saish for your reply.
I'm not using DAO in my application, but retrieve the data from a BI database using a web service. The response time querying the BI database is not quick enough.
So I wouldn't want to query the BI server on each servlet access.
Because the data I retrieved at the beginning using the web service contains all the required data for all servlet requests, I thought to store the data (~200K rows) once in the servlet, to be used for all requests.
Why not store the results locally in your own database after you fetch them?
This still leave me with the questions: in which event should I query the BI data, and also when or in which event should I update the data again from BI server?
Query at startup, on user demand, or when data becomes stale. It depends on your requirements.
Thanks,
Ami

- Saish -
help me..urgent..
How do I set up a query to select the year from a date using a WHERE expression?
ex: select sum(salary)
from table
where year(date)=2007
group by name;
Just something like that. Help me, please.

select *
from (
select 1 as rn, to_date('09052007','ddmmyyyy') as dt from dual union all
select 1 as rn, to_date('09052006','ddmmyyyy') as dt from dual
)
where to_char(dt,'yyyy') = '2007';

or
select *
from (
select 1 as rn, to_date('09052007','ddmmyyyy') as dt from dual union all
select 1 as rn, to_date('09052006','ddmmyyyy') as dt from dual
)
where extract(year from dt) = 2007; -
Problem while selecting BELNR from BSEG
Hi Experts,
I have a report performance problem while fetching BELNR from the BSEG table.
I have to print the latest BELNR from BSEG where BUZID = 'M', but at execution time the report takes too much time (more than an hour, and sometimes it hangs).
I have also gone through the comments provided by experts for previous problems asked in this forum, e.g. BSEG is a cluster table, which is why data retrieval takes a long time, etc.
Does anyone have any other idea, suggestion, or other way to solve this problem?
Regards,
Neeraj

Hi,
1) Try to create an index on the BUZID field.
2) Don't use a SELECT/ENDSELECT loop. Instead, extract all the concerned entries from BSEG into an internal table:
select belnr from bseg appending table itab where buzid = 'M'.
Then do this:
sort itab by belnr.
describe itab lines n.
read table itab index n.
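The SORT plus READ TABLE ... INDEX n idiom above amounts to taking the maximum of the collected values; a minimal Python sketch with invented document numbers:

```python
# Collect the matching document numbers, sort them, and take the
# last entry: the equivalent of SORT itab BY belnr followed by
# READ TABLE itab INDEX n (where n is the line count).
belnrs = ["0100000003", "0100000001", "0100000002"]
belnrs.sort()
latest = belnrs[-1]
print(latest)
```

Since BELNR is stored as a zero-padded character field, lexicographic order matches numeric order, which is what makes the sort-and-take-last approach valid.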
Please reward if helpful.
Regards,
Nicolas. -
Unable to access the data from Data Management Gateway: Query timeout expired
Hi,
For the last 2-3 days the data refresh has been failing on our Power BI site. I checked the following:
1. The gateway is in running status.
2. Data source is also in ready status and test connection worked fine too.
3. Below is the error in System Health -
Failed to refresh the data source. An internal service error has occurred. Retry the operation at a later time. If the problem persists, contact Microsoft support for further assistance.
Error code: 4025
4. Below is the error in Event Viewer.
Unable to access the data from Data Management Gateway: Query timeout expired. Please check 1) whether the data source is available 2) whether the gateway on-premises service is running using Windows Event Logs.
5. This is the correlation id for the latest refresh failure:
f9030dd8-af4c-4225-8674-50ce85a770d0
6. The Refresh History error is:
Errors in the high-level relational engine. The following exception occurred while the managed IDataReader interface was being used: The operation has timed out. Errors in the high-level relational engine. The following exception occurred while the
managed IDataReader interface was being used: Query timeout expired.
Any idea what could have gone wrong suddenly? Everything was working fine for the last month.
Thanks,
Richa

Never mind; I figured out there was a lock on a SQL table which caused all the problems. Once I released the lock, the PowerPivot refresh started working fine.
Thanks. -
Error while extracting data from data source 0RT_PA_TRAN_CONTROL, in RSA7
Hi Gurus,
I'm getting the below error while extracting data from data source 0RT_PA_TRAN_CONTROL in RSA7. (Actually this is an IS-Retail DataSource used to push POSDM data into BI cubes.)
The error is:
Update mode "Full Upload" is not supported by the extraction API
Message no. R3011
Diagnosis
The application program for the extraction of the data was called using update mode "Full Upload". However, this is not supported by the InfoSource.
System Response
The data extraction is terminated.
Procedure
Check for relevant OSS Notes, or send a problem message of your own.
Your help in this regd. would be highly appreciated.
Thanks,
David.

Hi David,
I have no experience with IS-Retail data sources, but as the message clearly says, this DataSource is not supposed to be run in Full mode.
Try to switch you DTPs/Infopackages to Delta mode.
While checking the extraction in the source system with transaction RSA3 (Extractor Checker), kindly switch the Update mode field to Delta.
BR
m./ -
Runtime error when transferring data from a data object to a file
Hi everybody,
I'm having a problem when I transfer data from a data object to a file. The code is like the following:
data : full_path(128).
OPEN DATASET full_path FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
and transfer data from a flat structure to this file full_path:
move: tab to c_output-tab_5,
tab to c_output-tab_4,
tab to c_output-tab_3.
transfer c_output to full_path. " Error line
The detail error like the following:
For the statement
"TRANSFER f TO ..."
only character-type data objects are supported at the argument position
"f".
In this case. the operand "f" has the non-character-type "u". The
current program is a Unicode program. In the Unicode context, the type
'X' or structures containing not only character-type components are
regarded as non-character-type.
transfer c_output to full_path. " Line error
Please help me to fix this issue !
Thank you in advance !
Edited by: Hai Nguyen on Mar 4, 2009 10:55 AM

Hi Mickey,
Thanks for your answer.
I found out that the structure c_output has a field with data type X. I now know the cause of the issue.
data: begin of c_output,
vbeln(10),
tab_5 like tab,
posnr(6),
tab_4 like tab,
topmat(18),
tab_3 like tab,
end of c_output.
data : tab type X value 9.
Could you tell me how to fix it? What do I have to do in this situation?
Thank you very much ! -
Delete Transaction Data from date to date
Hi All,
We want to delete transactional data from one date to another.
Is there any way to delete data for a date range?
We are aware of the following tcodes:
OBR1- Reset transaction data
CXDL - Delete transaction data from ledger
But there is no period or from-date/to-date option available.
Example:
We are in 2010; now we want to delete data from 2005-2007, and we don't want to archive it.
Thanks in advance
Regards,
MS

Hi Eli,
Thanks for the reply.
Yes, you are right, it is not proper to delete data based on the period... but we have this kind of typical scenario.
Let me get some other opinion
Regards,
MS -
Error while loading Reported Financial Data from Data Stream
Hi Guys,
I'm facing the following error while loading Reported Financial Data from Data Stream:
Message no. UCD1003: Item "Blank" is not defined in Cons Chart of Accts 01
The message appears in the target data. Item is not filled in almost 50% of the target data records, and the error message appears.
Upon deeper analysis I found that some items are defined with a Dr./Cr. sign of + and with no breakdown. When these items appear as negative (Cr.) in the source data, they are not properly loaded to the target data: Item is not filled, hence causing the error.
For Example: Item "114190 - Prepayments" is defined with + Debit/Credit Sign. When it is posted as negative / Credit in the source data, it is not properly written to the target.
Should I define a breakdown category for these items? I think there's something wrong with the item definitions, or I'm missing something...
I would highly appreciate your quick assistance in this.
Kind regards,
Amir

Found the answer with OSS Note 642591.
Thanks
Unique problem pulling data from a database table into an internal table
Dear Experts,
I am new to ABAP. I have a very basic question, but it looks quite puzzling to me, hence I am posting it here.
I am facing a unique problem pulling data from a database table and populating that data into an internal table for further use.
The data in the database table "Zlt_mita", with fields M1 (Employee Name, type CHAR20) and M2 (Employee Code, type CHAR7), is:
Please refer to the screenshot in the attached file.
My Code:
1) When I try to pull data from the database table by taking M2 as the parameter:
This code is successful and I am able to populate data into the internal table it_dat.
TYPES: Begin Of ty_DAT,
M1 TYPE Zlt_mita-M1,
M2 TYPE ZLT_mita-M2,
END OF ty_DAT.
DATA: it_dat TYPE STANDARD TABLE OF ty_dat with header line,
wa_dat TYPE ty_dat.
PARAMETERS: p_mitar TYPE Zlt_Mita-M2.
SELECT M1
M2
FROM ZLt_mita
INTO TABLE it_dat
Where M2 = p_mitar.
Loop at it_dat into wa_dat.
WRITE:/2 wa_dat-M1,
10 wa_dat-M2.
ENDLOOP.
2) When I try to pull data from the database table by taking M1 as the parameter:
This code is NOT successful and I am NOT able to populate data into the internal table it_dat.
TYPES: Begin Of ty_DAT,
M1 TYPE Zlt_mita-M1,
M2 TYPE ZLT_mita-M2,
END OF ty_DAT.
DATA: it_dat TYPE STANDARD TABLE OF ty_dat with header line,
wa_dat TYPE ty_dat.
PARAMETERS: P_Mita TYPE ZLT_Mita-M1.
SELECT M1
M2
FROM ZLt_mita
INTO TABLE it_dat
Where M1 = P_Mita.
Loop at it_dat into wa_dat.
WRITE:/2 wa_dat-M1,
10 wa_dat-M2.
ENDLOOP.
Why is this happening when both M1 and M2 are character-type fields?
Looking forward for your replies.
Regards
Chandan Kumar

Hi Chandan,
A database fetch is case sensitive, so you need to give the exact format in the WHERE condition.
Make your parameter and the database contents use the same case, so that you need not worry about case sensitivity.
Check the Lower Case checkbox in the domain.
Then declare your parameter:
PARAMETERS: P_Mita TYPE ZLT_Mita-M1 LOWER CASE.
You can also do the reverse by unchecking Lower Case and using upper case instead of lower case in the parameter declaration.
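A minimal Python sketch of the underlying point (values invented for illustration): an exact, case-sensitive comparison fails unless the case matches or is normalized first:

```python
# Why the M1 lookup returns nothing: string comparison in the
# database is exact and case sensitive, so the select only matches
# when the parameter's case equals the stored value's case.
stored = "SMITH"   # how the name sits in the table
typed = "Smith"    # what the user entered on the selection screen

print(stored == typed)           # exact comparison fails
print(stored == typed.upper())   # normalizing the case makes it match
```

This is why either the parameter must preserve the stored case (LOWER CASE in the domain/parameter) or both sides must be normalized to the same case before comparing.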
Regards,
Juneed Manha -
Only Select date from date picker
Hi All,
How can I disable typing a date into the text field (Choose Date component)? In other words, the user can only select from the date picker.
Thanks
Message was edited by:
user638709

Thank you for replying.
Actually, I tried to plug the JavaScript into the JSP page, but still I did not manage to disable user input. I followed these steps:
1. Add the script
<script type="text/javascript">
function filterInputComponent(){
    var component = document.getElementById("inputDate1");
    component.setAttribute("style","background-color:#ebe9e9");
    component.onfocus = function(evt){
        var _lovButton = component.nextSibling.nextSibling.nextSibling.nextSibling;
        _lovButton.focus();
    };
}
</script>
2. Define the id of the selectInputDate component as "inputDate1".
3. Change the body's onload property to "filterInputComponent()".
What might be the problem? Did I miss some steps?
Thanks