Change severity level of validation messages
Hi,
I was wondering whether there's an easy way to change the severity level of messages generated by framework validation.
for instance i use something like <f:validateDoubleRange minimum="0" maximum="200" /> and
<h:messages layout="table" errorClass="error_font" fatalClass="error_font" infoClass="info_font"/>.
When entering a number bigger than 200, a SEVERITY_INFO message is generated. As this severity level is reserved for a positive outcome of the operation, this is quite misleading to the user.
Are there any workarounds, or is it just not common practice to use h:messages for anything other than validation?
Sounds like a bug. This has to be an error rather than an info.
I don't have the environment to test it right now, but can you tell which JSF version you're using and if this problem will be solved if you install the latest from [http://javaserverfaces.dev.java.net] (which is currently 1.2_12)? Can you also tell which appserver implementation/version you're using?
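A note on the message text itself: the JSF spec defines validator failures with FacesMessage.SEVERITY_ERROR, so the INFO shown here does look like an implementation bug, but the wording (though not the severity) can be overridden through a custom message bundle. A minimal sketch, assuming a hypothetical bundle com.example.MyMessages on the classpath (the keys shown are the JSF 1.2 standard names; verify them against your JSF version).

In faces-config.xml:

```xml
<faces-config>
  <application>
    <message-bundle>com.example.MyMessages</message-bundle>
  </application>
</faces-config>
```

And in MyMessages.properties:

```properties
javax.faces.validator.DoubleRangeValidator.MAXIMUM = Value must not be larger than {0}.
javax.faces.validator.DoubleRangeValidator.NOT_IN_RANGE = Value must be between {0} and {1}.
```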
Similar Messages
-
How to display Entity level validation messages in a table
I'm using ADF 11g 11.1.1.2.0. I have a page with an updateable table based on an EO. In the EO I have defined some entity level and attribute level declarative business rules. When the user enters a value in the table that violates an attribute level validation, the related message is displayed inline and the attribute is surrounded by a red box. Furthermore the user is not able to place his cursor in a different row. In this way it is clear which row and attribute caused the error. However if the user enters some data that violates an entity level validation, the validation message is displayed as global messages in the message-popup when the business rule is based on a script or a method. This means that the user gets a global message popup, and does not know which row caused the error. Ofcourse we can include the identifying attributes of the row in the error message, but I would like to know if there is another more visual way to communicate to the user which row caused the error.
Edited by: Jonas de Graaff (PSB) on 10-feb-2010 2:51
Hi Chittibabu,
what about using a TreeTable control?
SAPUI5 SDK - Demo Kit
Regards
Tobias -
How can i change validator Messages
I am a newbie in JSF; I'm just getting started.
I am making a very simple login application.
Here I do a validation on a text field.
The validation message shows as "form1:txtid: Validation Error: Value is required." I want to change this message to something like "ID is required." How can I do this? Thanks, shakeel bahi.
That only works for the required property, am I right?
How do I change the message on the length validator?
For example, minimum length is 2 and maximum is 10.
The error messages show as:
for minimum: form1:txtID validation error minimum length 2
for maximum: form1:txtID validation error maximum length 10
Here I want to replace my messages with something like
"User Id length should be between 2 and 10". Thanks again for your efforts -
Change LightSwitch default validation message
Hi,
I have two tables 1. State, 2.District
1.State- StateId-Pk, StateName
2.District- DistrictId-Pk, DistrictName, StateId-Fk
Now I have created an AddEditScreen for District. As the two tables are in a relationship, the AddEditScreen is showing 3 fields: 1. DistrictName, 2. StateName, 3. StateId. The StateName field is a DetailsPicker. I have removed the StateId field from the display page. Now when I'm clicking on the save button it is giving an error message "StateId: This field is required.". I don't want to show the StateId field. I only want to show the 1. DistrictName and 2. StateName fields. Now how do I change the default validation message to "State Name: This field is required." if anyone doesn't select a state name?
Please help me.
Thanks in advance.
Hello
The screenshot you have attached is not the screen designer; this is the data designer (for want of a better word for it). The screen designer is where you add the data objects onto the screen. I have attached a screenshot below of what you are looking for.
I know that this is of an older version of Visual Studio, but the principle is still the same.
As you can see, I have selected the Name field within VeryFullTables and then selected the small arrow next to Write Code. Then you select the _Validate method. This will allow you to enter some simple code that will show on the validation message.
If you found this post helpful, please mark it as helpful. If by some chance I answered the question, please mark the question as answered. That way you will help more people like me :) -
Want to change length validator message and change field format.
I want to override the Length Validator message, as I want to tell the user that the phone number
must contain 10 numbers.
I've added this validator on the phone field
<af:validateLength minimum="10" maximum="10"
                   hintMinimum="11" hintMaximum="22"
                   hintNotInRange="33" messageDetailMinimum="44"
                   messageDetailMaximum="55"
                   messageDetailNotInRange="66" id="lengthval"/>
With this, when I enter fewer than 10 numbers, the error message says that you must enter 10 characters.
I want to override this message to be "you must enter 10 numbers"
And I want to make this number format like
XXXX,XXX,XX
How can I make this?
Strictly speaking, a number in this format is not a number. It's a phone number and you should use a regular expression validator for this.
Check http://docs.oracle.com/cd/E15051_01/apirefs.1111/e12419/tagdoc/af_validateRegExp.html for more info.
Timo -
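As a sketch of Timo's suggestion, the rule an af:validateRegExp would enforce (exactly ten digits) can be expressed with a plain java.util.regex pattern. The class and method names here are illustrative, not part of any ADF API:

```java
import java.util.regex.Pattern;

public class PhoneCheck {
    // Exactly ten digits: the rule the regex validator would enforce.
    private static final Pattern TEN_DIGITS = Pattern.compile("\\d{10}");

    public static boolean isValidPhone(String input) {
        return input != null && TEN_DIGITS.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidPhone("0123456789")); // valid: ten digits
        System.out.println(isValidPhone("12345"));      // invalid: too short
    }
}
```

In af:validateRegExp terms this would be pattern="\d{10}" together with a messageDetailNoMatch text such as "you must enter 10 numbers".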
Dont allow to change item level data in sales order.
Hi all,
I have a requirement in which users should not be allowed to change item-level data or to add any new items in a sales order under a certain condition. But they should be allowed to change the header-level data.
How can I achieve this?
Can anyone help me?
Hi,
Check below exit.
MV45AFZZ and in form USEREXIT_MOVE_FIELD_TO_VBAP.
Here check for your validation. If it passes, then check the variable SVBAP-TABIX: if it is 0, the item is being created; if it is GT 0, the item is being changed. The other way could be:
Select data from VBAP for each sales document and item into the xvbap internal table.
If for any item you don't have data in the VBAP table, that means you are adding that item, so issue an error message.
* FORM USEREXIT_MOVE_FIELD_TO_VBAP *
* This userexit can be used to move some fields into the sales *
* dokument item workaerea VBAP *
* SVBAP-TABIX = 0: Create item *
* SVBAP-TABIX > 0: Change item *
* This form is called at the end of form VBAP_FUELLEN. *
Thanks,
Vinod. -
Web Form Validation Message Language Setting at Runtime when work in multi lingual environment
Business Catalyst use the default culture language to display web form validation message.
When we are in multi lingual environment and not using subdoamin to handle multilingual sites, we found that the validation message did appear in the default culture setting. To make this work, we need to add the below script in our template.
<script type="text/javascript">
$(document).ready(function(){
var head= document.getElementsByTagName('head')[0];
var script= document.createElement('script');
script.src= '/BcJsLang/ValidationFunctions.aspx?lang=FR';
script.charset = 'utf-8';
script.type= 'text/javascript';
head.appendChild(script);
</script>
Assuming the template is in French, you can change the lang parameter in the script according to your language.
After user 1 submits the page, it might not even be committed, so there is no way to have the pending data from user1 seen by user2.
However, we do have a new feature in ADF 11g TP4 that I plan to blog more about called Auto-Refresh view objects. This feature allows a view object instance in a shared application module to refresh its data when it receives the Oracle 11g database change notification that a row that would affect the results of the query has been changed.
The minimum requirements in 11g TP4 to experiment with this feature which I just tested are the following:
1. Must use Database 11g
2. Database must have its COMPATIBLE parameter set to '11.0.0.0.0' at least
3. Set the "AutoRefresh" property of the VO to true (on the Tuning panel)
4. Add an instance of that VO to an application module (e.g. LOVModule)
5. Configure that LOVModule as an application-level shared AM in the project properties
6. Define an LOV based on a view accessor that references the shared AM's VO instance
7. DBA must have performed a 'GRANT CHANGE NOTIFICATION TO YOURUSER'
8. Build an ADF Form for the VO that defined the LOV above and run the web page
9. In SQLPlus, go modify a row of the table on which the shared AM VO is based and commit
When the Database delivers the change notification, the shared AM VO instance will requery itself.
However that notification does not arrive all the way out to the web page, so you won't see the change until the next time you repaint the list.
Perhaps there is some way to take it even farther with the active data feature PaKo mentions, but I'm not familiar enough with that myself to say whether it would work for you here. -
Changing Isolation Level Mid-Transaction
Hi,
I have a SS bean which, within a single container managed transaction, makes numerous
database accesses. Under high load, we start having serious contention issues
on our MS SQL server database. In order to reduce these issues, I would like
to reduce my isolation requirements in some of the steps of the transaction.
To my knowledge, there are two ways to achieve this: a) specify isolation at the
connection level, or b) use locking hints such as NOLOCK or ROWLOCK in the SQL
statements. My questions are:
1) If all db access is done within a single tx, can the isolation level be changed
back and forth?
2) Is it best to set the isolation level at the JDBC level or to use the MS SQL
locking hints?
Is there any other solution I'm missing?
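On question 1, JDBC does expose Connection.setTransactionIsolation, but the spec leaves its behavior mid-transaction implementation-defined, so drivers may reject or defer the change between statements. A minimal sketch of the per-step idea, with hypothetical step names and a hypothetical helper (not any WebLogic or MS SQL API):

```java
import java.sql.Connection;

public class IsolationPlan {
    // Hypothetical step names for phases of the container-managed transaction.
    public static int isolationFor(String step) {
        switch (step) {
            case "bulk-read":       // contended, read-heavy step: relax isolation
                return Connection.TRANSACTION_READ_UNCOMMITTED;
            case "balance-update":  // critical write: keep it strict
                return Connection.TRANSACTION_SERIALIZABLE;
            default:
                return Connection.TRANSACTION_READ_COMMITTED;
        }
    }

    public static void main(String[] args) {
        // Before each step the bean would call:
        //   conn.setTransactionIsolation(isolationFor(step));
        // subject to the driver allowing the change mid-transaction.
        System.out.println(isolationFor("bulk-read"));
    }
}
```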
Thanks,
Sebastien
Galen Boyer wrote:
On Sun, 28 Mar 2004, [email protected] wrote:
Galen Boyer wrote:
On Wed, 24 Mar 2004, [email protected] wrote:
Oracle's serializable isolation level doesn't offer what most
customers I've seen expect it to offer. They typically expect
that a serializable transaction will block any read-data from
being altered during the transaction, and oracle doesn't do
that.
I haven't implemented WEB systems that employ anything but the default concurrency control, because a web transaction is usually very long running and therefore holding a connection open during its life is unscalable. But, your statement did make me curious. I tried a quick test case.
IN ONE SQLPLUS SESSION:
SQL> alter session set isolation_level = serializable;
SQL> select * from t1;
        ID FL
---------- --
         1 AA
         2 BB
         3 CC
NOW, IN ANOTHER SQLPLUS SESSION:
SQL> update t1 set fld = 'YY' where id = 1;
1 row updated.
SQL> commit;
Commit complete.
Now, back to the previous session.
SQL> select * from t1;
        ID FL
---------- --
         1 AA
         2 BB
         3 CC
So, your statement is incorrect.
Hi, and thank you for the diligence to explore. No, actually you proved my point. If you did that with SQLServer or Sybase, your second session's update would have blocked until you committed your first session's transaction.
Yes, but this doesn't have anything to do with serializable. This is the weak behaviour of those systems that say writers can block readers.
Weak or strong, depending on the customer point of view. It does guarantee
that the locking tx can continue, and read the real data, and eventually change
it, if necessary without fear of blockage by another tx etc.
In your example, you were able to change and commit the real
data out from under the first, serializable transaction. The
reason why your first transaction is still able to 'see the old
value' after the second tx committed, is not because it's
really the truth (else why did oracle allow you to commit the
other session?). What you're seeing in the first transaction's
repeat read is an obsolete copy of the data that the DBMS
made when you first read it. Yes, this is true.
Oracle copied that data at that time into the per-table,
statically defined space that Tom spoke about. Until you commit
that first transaction, some other session could drop the whole
table and you'd never know it.
This is incorrect.
Thanks. Point taken. It is true that you could have done a complete delete
of all rows in the table though..., correct?
That's the fast-and-loose way oracle implements
repeatable-read! My point is that almost everyone trying to
serialize transactions wants the real data not to
change. Okay, then you have to lock whatever you read, completely.
SELECT FOR UPDATE will do this for your customers, but
serializable won't. Is this the standard definition of
serializable of just customer expectation of it? AFAIU,
serializable protects you from overriding already committed
data.
The definition of serializable is loose enough to allow
oracle's implementation, but non-changing relevant data is
a typically understood hope for serializable. Serializable
transactions typically involve reading and writing *only
already committed data*. Only DIRTY_READ allows any access to
pre-committed data. The point is that people assume that a
serializable transaction will not have any of it's data re
committed, ie: altered by some other tx, during the serializable
tx.
Oracle's rationale for allowing your example is the semantic
arguement that in spite of the fact that your first transaction
started first, and could continue indefinitely assuming it was
still reading AA, BB, CC from that table, because even though
the second transaction started later, the two transactions *so
far*, could have been serialized. I believe they rationalize it by saying that the state of the
data at the time the transaction started is the state throughout
the transaction.
Yes, but the customer assumes that the data is the data. The customer
typically has no interest in a copy of the data staying the same
throughout the transaction.
Ie: If the second tx had started after your first had
committed, everything would have been the same. This is true!
However, depending on what your first tx goes on to do,
depending on what assumptions it makes about the supposedly
still current contents of that table, it may ether be wrong, or
eventually do something that makes the two transactions
inconsistent so they couldn't have been serialized. It is only
at this later point that the first long-running transaction
will be told "Oooops. This tx could not be serialized. Please
start all over again". Other DBMSes will completely prevent
that from happening. Their value is that when you say 'commit',
there is almost no possibility of the commit failing. But this isn't the argument against Oracle. The unable to
serialize doesn't happen at commit, it happens at write of
already changed data. You don't have to wait until issuing
commit, you just have to wait until you update the row already
changed. But, yes, that can be longer than you might wish it to
be. True. Unfortunately the typical application writer logic may
do stuff which never changes the read data directly, but makes
changes that are implicitly valid only when the read data is
as it was read. Sometimes the logic is conditional so it may never
write anything, but may depend on that read data staying the same.
The issue is that some logic wants truely serialized transactions,
which block each other on entry to the transaction, and with
lots of DBMSes, the serializable isolation level allows the
serialization to start with a read. Oracle provides "FOR UPDATE"
which can supply this. It is just that most people don't know
they need it.
With Oracle and serializable, 'you pay your money and take your
chances'. You don't lose your money, but you may lose a lot of
time because of the deferred checking of serializable
guarantees.
Other than that, the clunky way that oracle saves temporary
transaction-bookkeeping data in statically- defined per-table
space causes odd problems we have to explain, such as when a
complicated query requires more of this memory than has been
alloted to the table(s) the DBMS will throw an exception
saying it can't serialize the transaction. This can occur even
if there is only one user logged into the DBMS.
This one I thought was probably solved by database settings,
so I did a quick search, and Tom Kyte was the first link I
clicked and he seems to have dealt with this issue before.
http://tinyurl.com/3xcb7 HE WRITES: serializable will give you
repeatable read. Make sure you test lots with this, playing
with the initrans on the objects to avoid the "cannot
serialize access" errors you will get otherwise (in other
databases, you will get "deadlocks", in Oracle "cannot
serialize access") I would bet working with some DBAs, you
could have gotten past the issues your client was having as
you described above.
Oh, yes, the workaround every time this occurs with another
customer is to have them bump up the amount of that
statically-defined memory. Yes, this is what I'm saying.
This could be avoided if oracle implemented a dynamically
self-adjusting DBMS-wide pool of short-term memory, or used
more complex actual transaction logging. ? I think you are discounting just how complex their logging
is.
Well, it's not the logging that is too complicated, but rather
too simple. The logging is just an alternative source of memory
to use for intra-transaction bookkeeping. I'm just criticising
the too-simpleminded fixed-per-table scratch memory for stale-
read-data-fake-repeatable-read stuff. Clearly they could grow and
release memory as needed for this.
This issue is more just a weakness in oracle, rather than a
deception, except that the error message becomes
laughable/puzzling that the DBMS "cannot serialize a
transaction" when there are no other transactions going on.
Okay, the error message isn't all that great for this situation.
I'm sure there are all sorts of cases where other DBMS's have
laughable error messages. Have you submitted a TAR?
Yes. Long ago! No one was interested in splitting the current
message into two alternative messages:
"This transaction has just become unserializable because
of data changes we allowed some other transaction to do"
or
"We ran out of a fixed amount of scratch memory we associated
with table XYZ during your transaction. There were no other
related transactions (or maybe even users of the DBMS) at this
time, so all you need to do to succeed in future is to have
your DBA reconfigure this scratch memory to accomodate as much
as we may need for this or any future transaction."
I am definitely not an Oracle expert. If you can describe for
me any application design that would benefit from Oracle's
implementation of serializable isolation level, I'd be
grateful. There may well be such.
As I've said, I've been doing web apps for awhile now, and
I'm not sure these lend themselves to that isolation level.
Most web "transactions" involve client think-time which would
mean holding a database connection, which would be the death
of a web app.
Oh absolutely. No transaction, even at default isolation,
should involve human time if you want a generically scaleable
system. But even with a to-think-time transaction, there is
definitely cases where read-data are required to stay as-is for
the duration. Typically DBMSes ensure this during
repeatable-read and serializable isolation levels. For those
demanding in-the-know customers, oracle provided the select
"FOR UPDATE" workaround.
Yep. I concur here. I just think you are singing the praises of
other DBMS's, because of the way they implement serializable,
when their implementations are really based on something that the
Oracle corp believes is a fundamental weakness in their
architecture, "Writers block readers". In Oracle, this never
happens, and is probably one of the biggest reasons it is as
world-class as it is, but then its behaviour on serializable
makes you resort to SELECT FOR UPDATE. For me, the trade-off is
easily accepted.
Well, yes and no. Other DBMSes certainly have their share of faults.
I am not critical only of oracle. If one starts with Oracle, and
works from the start with their performance arcthitecture, you can
certainly do well. I am only commenting on the common assumptions
of migrators to oracle from many other DBMSes, who typically share
assumptions of transactional integrity of read-data, and are surprised.
If you know Oracle, you can (mostly) do everything, and well. It is
not fundamentally worse, just different than most others. I have had
major beefs about the oracle approach. For years, there was TAR about
oracle's serializable isolation level *silently allowing partial
transactions to commit*. This had to do with tx's that inserted a row,
then updated it, all in the one tx. If you were just lucky enough
to have the insert cause a page split in the index, the DBMS would
use the old pre-split page to find the newly-inserted row for the
update, and needless to say, wouldn't find it, so the update merrily
updated zero rows! The support guy I talked to once said the developers
wouldn't fix it "because it'd be hard". The bug request was marked
internally as "must fix next release" and oracle updated this record
for 4 successive releases to set the "next release" field to the next
release! They then 'fixed' it to throw the 'cannot serialize' exception.
They have finally really fixed it.( bug #440317 ) in case you can
access the history. Back in 2000, Tom Kyte reproduced it in 7.3.4,
8.0.3, 8.0.6 and 8.1.5.
Now my beef is with their implementation of XA and what data they
lock for in-doubt transactions (those that have done the prepare, but
have not yet gotten a commit). Oracle's over-simple logging/locking is
currently locking pages instead of rows! This is almost like Sybase's
fatal failure of page-level locking. There can be logically unrelated data
on those pages, that is blocked indefinitely from other equally
unrelated transactions until the in-doubt tx is resolved. Our TAR has
gotten a "We would have to completely rewrite our locking/logging to
fix this, so it's your fault" response. They insist that the customer
should know to configure their tables so there is only one datarow per
page.
So for historical and current reasons, I believe Oracle is absolutely
the dominant DBMS, and a winner in the market, but got there by being first,
sold well, and by being good enough. I wish there were more real market
competition, and user pressure. Then oracle and other DBMS vendors would
be quicker to make the product better.
Joe -
Change the level of isolation in an Informix connection
Post Author: mibarz
CA Forum: Data Integration
I'm working over Informix and I need to change the level of isolation before making a query.
The Informix instruction is SET ISOLATION TO DIRTY READ. In the Data flow we are using the SQL statement object to retrieve the data.
I tried to change the level of isolation in the ODBC configuration, but it's impossible.
Can anybody help me with this problem?
Thanks,
Martí
Post Author: bhofmans
CA Forum: Data Integration
Unfortunately this is not possible in Data Integrator today. We have several enhancement requests for this and similar functionality for several RDBMS. For MSSQL server a workaround is provided via a DSConfig parameter, for other RDBMS we don't have a solution yet. -
Error while Changing log level in Agentry 6.0.44.1
Hi ,
I am trying to change the log level from the Agentry Administration client, but I am getting the below error message.
Also, it is not showing any option to change the log levels for users. How can I change the log level for users as well?
How can I resolve this error?
Regards,
Shyam
Shyam,
This is fixed or planned fixed in 6.0.46 Agentry Server (supposed to be available anytime soon in the Service Marketplace - it was submitted already to the SMP team). The fix was in SMP 2.3 and SMP 3.0 but it was ported back to the Agentry 6.0.X release. If you have no access to the Agentry 6.0.X patches this means that your SAP License is preventing you from downloading it. You may need to contact the SAP CIC (customer interaction center) group.
Or you can do the manual setup for a workaround for now.
SAP KBA article: 2048202 - AgentryGUI does not allow to change log settings - Not all setttings were successfully changed.
Regards,
Mark Pe
SAP Senior Support Engineer (Mobility) -
How to set a dynamic validation message in javascript
Hi,
I am using the "validate" event on a field, along with the "script message" field, to make a validation and send a message to the user if the test fails.
- Is it possible to define parameters in this message, for example "field &1 is invalid" where we replace &1 by the name of a field?
- Is it possible to send 2 different messages (I guess it's like using a message &1)?
- What is the best practice according to your experience?
Notes:
- I am aware of the xfa.host.messageBox, but I'd like to keep Adobe logic for validations (am I wrong? why?)
- I also saw the possibility of binding a field from the context, to the message field, but I found that it was not very clean to do this way (if even possible)
Thx!
According to the tests I did since yesterday, it is very difficult to use the "validation script message" (in the "value" tab of a "text input" field, within a dynamic table) for sending a dynamic message.
I give up, and prefer to use
xfa.host.messageBox("dynamic message text")
For information, I could change the message during "validate" event, with a rather complex algorithm.
Unfortunately, when a table row is just added (dynamically, with a button), though the message has been changed, it displays the original value. When I change the field again, the changed validation message is taken into account. I don't know why.
Edited by: Sandra Rossi on Jul 24, 2009 9:01 AM : it's only to say that since then, this was the only solution! Question closed -
Oracle Devs - "Customizing a Standard Validator Message" tutorial moved?
Guys and Gals,
Page 366 of the Oracle JDeveloper 11g Handbook: A Guide to Oracle Fusion Web Development references a "Customizing a Standard Validator Message" tutorial on java.sun.com.
It is nowhere to be found. Java.sun.com redirects to another oracle webpage, and the JDev tutorials do not seem to cover this material.
I'm not looking to add my own custom validator, but rather modify the default JSF error message. i.e. Change standard validator text from "Too many objects match the primary key oracle.jbo.key[<mypart>]" to "Part <mypart> already exists.".
Can anyone reference another tutorial for this topic?
Thanks in advance.
HaHa! Finally found something similar. Sweet sauce.
http://netbeans.org/kb/docs/web/convert-validate.html#08 -
How to keep order of required validation messages while displaying.
Hi,
I need to keep required validation messages as in the order of components in the page. Now it not showing properly.
please let me know , if anybody have the solution.
Vipin
Hi user,
I am not sure this is the best way, but you may try this:
The messages are raised in the order the query and attributes are organized in the VO, I think.
So you may change the way the VO is organized, based on your need.
And I am sure someone will post or suggest better comments about this. -
Error at the time change factory calender (scal) validity period
Dear Experts
When I tried to change the factory calendar valid period with t-code SCAL from year 1996 to year 2011, it gave me the message "Please enter validity area between years 1995 - 2010." and I wasn't allowed to save.
Pls advise how to solve the above problem.
Thanks.
BK GAIKWAD
Hi,
Your issue is clear: the factory or plant calendar is defined for only 5 years, 2005 to 2010.
NOTE: please contact your HR or FICO consultants for this issue and don't change anything in PRD, or you will be responsible for it.
Hope this clears your issue.
balajia -
Prime Infrastructure 2.0 Link-Down Alarm severity level
Hi,
I have been looking for a way to lower the severity level of link-down alarms. I use the threshold utilization feature and want to generate an alarm when CPU or Memory goes over 90%; this works fine. The problem is that when you enable the "Switches and Hubs" or "Routers" category to receive alarms under Operate -> Alarms & Events -> Email Notifications, you define that only Critical alarms should generate an email, and Link-down is Critical by default.
Well, you'd figure that you should be able to change this, but I can't find anything to change the severity under Administration -> System Settings -> Severity Configuration.
Does anyone know if this is even possible? If so, please tell me how.
Hi,
Yes it is possible.
Changing Alarm Severities
You can change the severity level for newly generated alarms
Note Existing alarms remain unchanged.
To change the severity level of newly generated alarms:
Step 1 Choose Administration > System Settings .
Step 2 Choose Severity Configuration from the left sidebar menu.
Step 3 Select the check box of the alarm condition whose severity level you want to change.
Step 4 From the Configure Severity Level drop-down list, choose the new severity level ( Critical , Major , Minor , Warning , Informational , or Reset to Default ).
Step 5 Click Go, then click OK.
Check the attached screen shot ..
Thanks-
Afroz
***Ratings Encourages Contributors ***