Where to put data validation & DB access in an MVC-designed app?

Hi,
I'm writing a stand-alone app. Because the user interface might require lots of changes in the future, I want to apply the MVC paradigm in the user interface design to improve maintainability.
Where should I put the routines that perform data validation and read/write the database?
Any advice would be greatly appreciated.
Setya

As the tutorial says, you mustn't perform blocking or long-running operations in the GUI event thread.
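In MVC terms, the usual answer to the question itself is: validation rules and database access live in the model layer, the controller invokes them, and anything slow runs off the GUI event thread. A minimal Swing-flavored sketch of that split (all class and method names here are my own illustration, not from the thread):

    import javax.swing.SwingWorker;

    // Model layer: owns validation and persistence, knows nothing about the UI.
    class CustomerModel {
        void validate(String name) {
            if (name == null || name.isBlank()) {
                throw new IllegalArgumentException("Name must not be empty");
            }
        }
        void save(String name) {
            // The JDBC/DAO call would go here; kept abstract in this sketch.
        }
    }

    // Controller: invoked on the GUI event thread, pushes slow work to a worker.
    class CustomerController {
        private final CustomerModel model = new CustomerModel();

        void onSaveClicked(String name) {
            model.validate(name);            // fast, fine on the event thread
            new SwingWorker<Void, Void>() {  // DB write happens off the EDT
                @Override protected Void doInBackground() {
                    model.save(name);
                    return null;
                }
            }.execute();
        }
    }

This keeps the UI swappable: a future interface change replaces the view and controller, while validation and persistence stay untouched in the model.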

Similar Messages

  • Where to put data processing routine when acquiring data using DAQmx

    I have a program that is acquiring data using the DAQmx Acquire N Samples mechanism with automatic reset and a data handler callback routine. DAQmx acquires N samples (usually 1024) from the board, calls the handler to do something with them, and then resets to get the next batch of data. The program acquires a number of lines of data, say 512 lines of N points each, with one callback call per line. Triggering is done by a hardware trigger at the start of each line of data. So far so good.
    The issue is that the time that can be spent in the callback is limited, or else the callback is not finished when the next batch of data is ready to be transferred from the DAQmx buffers and processed. There is a substantial amount of analysis to be done after the entire frame has been acquired, and it ends up taking far longer than the time between lines; so where to put the processing? The data acquisition is started from a control callback that exits back to the idle loop after starting the acquisition process, so there is no code waiting to return to when the data acquisition is finished.
    I could try to put the data analysis routine into an idle-time routine and trigger it with a semaphore, or I could put it into a timer control callback with, say, a 10 millisecond repetition rate and poll a flag that is set when all of the data has been acquired. Any suggestions would be appreciated.

    I would recommend using Thread Safe Queues. Your acquisition callback can place items in the TSQ, and you can then process the data in a separate thread. TSQs are nice because they allow you to install a callback function to run for certain events. Most importantly, you can install a callback for the Items in Queue or Queue Size Changed event, which will run the callback when a certain number of items are in the queue. This lets you take advantage of multithreading in a simple and safe way using a standard Producer/Consumer architecture. However, you may still run into problems with this architecture if your acquisition thread is running much faster than the consumer thread; you could eventually overflow the queue. In that case, your only options are to get a faster system, slow down the acquisition, or do the data handling as a post-process.
    National Instruments
    Product Support Engineer
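    For illustration only, here is the same Producer/Consumer idea sketched with Java's BlockingQueue standing in for the Thread Safe Queue (the thread above is about LabWindows/CVI, so this shows the generic pattern, not the NI API):

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        // Producer/consumer sketch: the acquisition callback only enqueues,
        // a separate thread does the slow per-frame analysis.
        class AcquisitionPipeline {
            // Bounded queue: if analysis can't keep up, put() blocks instead of overflowing.
            private final BlockingQueue<double[]> lines = new ArrayBlockingQueue<>(512);

            // Called from the acquisition side for each line of N samples.
            void onLineAcquired(double[] samples) throws InterruptedException {
                lines.put(samples);
            }

            // Runs on its own thread; drains lines and performs the heavy analysis.
            void startConsumer() {
                Thread t = new Thread(() -> {
                    try {
                        while (true) {
                            double[] line = lines.take();
                            analyze(line);   // the expensive work, off the acquisition path
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                t.setDaemon(true);
                t.start();
            }

            private void analyze(double[] line) { /* frame assembly + analysis here */ }
        }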

  • SMP 3.0 Agentry: Where to put data table files?

    Hi,
    we are using several file system data tables in our application which load their data from a file named "datatables.txt".
    In SMP 2.3 this file was placed inside the "tables" directory of the Agentry server. In SMP 3.0 we tried the com.sap.mobile.platform.server.agentry.application directory without success. What is the right place for that file now?
    Thanks
    Christoph

    Christoph,
    Yes, I would recommend logging a support ticket related to this so it can be reviewed and hopefully migrated to the new application-specific directory.
    --Bill

  • Where to put Validation Code?

    Up until now I'm still having second thoughts about where to put validation code when setting attributes of an entity.
    Right now I have created lots of custom validators (implementing JboValidatorInterface) that call stored procedures to validate the value entered. But sometimes I just use a ViewObject, query it in the attribute setter method of the Entity, and manually throw a JboException if the value is invalid based on some business rule.
    The question is: what are the best practices for where to put validation code? Do we have to be strict and always put all validations in Validators, or are we free to throw JboExceptions anywhere in the BC classes' code?
    regards,
    Anton

    1. The reason I have a custom validator, and don't normally use the built-in declarative validators, is that the error message generated when validation fails is fixed: only one message. I need to test one attribute against many cases, and each case should produce a distinct error message. So if I used the built-in validators, I would have to create lots of them for that single attribute alone (and I have lots of entities and lots of attributes that need business rule validation). So I decided to create a custom validator that calls a stored procedure; the stored procedure takes care of all the test cases for that attribute, and I can return a dynamic error message depending on the test case that failed. What do you think about the approach?
    It's a little extra work to create a reusable validator class that will only be used once, but whether you do it that way or encapsulate the call in a helper class that your one-off method validator code delegates to, it seems similar to me. So it's more of a stylistic choice which you like better. Now, if your reusable validator were able to encapsulate…
    2. When I said "anywhere", I meant: inside the attribute setter methods on the Entity or the ViewRowImpl, in the ViewImpl class, or inside a method on an ApplicationModule.
    Rather than writing code in the setAttribute, I recommend using attribute-level method validators. This makes it more clear where all your validation code lives.
    I don't recommend performing validation in the view object level since entity objects are the component in the system that are tasked with handling validation. It would be easy to circumvent view level validation checks unless you make a lot of assumptions about exactly how your application is working.
    3. One other issue is that validator methods are for validation purposes only, so it's not a good idea to set other attributes inside them. You put the attribute-setting logic outside the validator, usually inside setAttribute() just after the validator returns. But there are cases where it is very straightforward to put validation logic inside setAttribute(): inside the setAttribute() method I test for a condition; if it fails, I just throw a JboException, and if it passes, I continue with the other attributes' setter logic.
    Whether the setting of other attributes is performed in a setter method or in an attribute-level method validator, either way you will need conditional logic to avoid going into a validation "loop" (which eventually will throw an exception if, after 10 attempts, the object is still not valid at posting time).
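    To make the attribute-level method validator suggestion concrete, here is a rough ADF BC sketch; the entity and attribute names are invented, and the failure message itself would be configured declaratively on the validator:

        // EmployeeImpl.java -- attribute-level method validator (illustrative names).
        // Attached declaratively to the Salary attribute; returning false makes the
        // framework raise the error message configured on that validator.
        public class EmployeeImpl extends oracle.jbo.server.EntityImpl {
            public boolean validateSalary(oracle.jbo.domain.Number salary) {
                if (salary == null) {
                    return false;                  // treat a missing value as invalid
                }
                return salary.doubleValue() >= 0;  // business rule: non-negative
            }
        }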

  • Data from Microsoft Access?

    Hi,
    When I deploy a report that takes data from Microsoft Access, I get the error: "Unable to retrieve object."
    How do we set the data source for this? I tried putting the MS Access file in the same folder as the published rpt file, but it didn't work.
    Thanks

    There are a couple of issues here for you to look at.
    1.  How are you connecting to the database from your development machine?  If you're using an ODBC connection, that same connection must be set up on the computer where the report is deployed.  Whether you're using ODBC or another connection method, your deployment computer will also need whatever drivers are necessary for connecting to Access installed (they aren't installed by default with Windows!)
    2.  If the database resides on the network (I know, you said it was in the same folder as the report, but you may not always be able to do that!) the user who is logged in to the machine where your app is deployed will have to have valid network access to the folder where the database is located.
    -Dell
    - A computer only does what you told it to, not what you *thought* you told it to!

  • Date Validation - a Nightmare for ME!

    Hi, I searched the forum and came up with some code that partially works for me. I am still having problems getting the end result that is needed.
    I need to do a date validation where the begin date is less than the end date. Below is the syntax. It runs, but when I put in a correct end date I still get the error message although the date is correct. Also, if I set the focus back to the field and correct the date, it still sets the focus back... I'm sure it is something simple that I am missing.
    ----- form1.subformpage1.empnsubform.DateTimeField2::exit - (JavaScript, client) -------------------
    if (DateTimeField2.formattedValue>=DateTimeField1.formattedValue);
    xfa.host.messageBox("Incorrect Date range");
    What am I missing? Yes I still struggle with writing scripts!!!!
    thanks

    I have a similar thing on one of my forms. The fields are "Date Submitted" and "Date Needed", and I need to validate that the Date Submitted date occurs before the Date Needed date. If it does not, it prompts a response dialog box and asks for a new Date Needed to be entered. Here is the code I used (I'm by no means an expert at code):

        // This just sets the values of the date/time fields to variables, and then
        // checks for a null value. If null, it changes the rawValue to an empty string
        // for inserting into a database. If not null, it leaves the existing rawValue
        // unchanged. Unless you're writing info to a database, you probably wouldn't need this.
        var DateSubmit = form1.MainForm.Info.DateSubmitted.rawValue == null ? "" : form1.MainForm.Info.DateSubmitted.rawValue;
        var DateNeed = form1.MainForm.Info.DateNeeded.rawValue == null ? "" : form1.MainForm.Info.DateNeeded.rawValue;

        if (DateNeed < DateSubmit)
        {
            var dateResponse = xfa.host.response("The Date Needed date must be later than the Date Submitted date.\nPlease enter new date below: (MM/DD/YYYY format)", "Date Needed Error");
            var myDate = new Date(dateResponse);
            var myFormattedDate = util.printd("dd mmm yyyy", myDate);
            form1.MainForm.Info.DateNeeded.formattedValue = myFormattedDate;
        }

    BTW, I have this as code on a submit button that does all of my validations and then writes a new record to a database. But I think you could also do this on the exit event of the second date/time field if needed. The variable declarations at the top would be slightly different.
    Lynne

  • HT4796 After transferring data from my PC to Mac using Windows Migration Assistant, where does the data go on my Mac?

    After transferring data from my PC to Mac using Windows Migration Assistant, where does the data go on my Mac, please?

    Welcome to the Apple Support Communities
    By default, Migration Assistant creates a new user account where it puts all the transferred data. Just go to  > Log Out, and log in as the new user that Migration Assistant has created; then you will be able to access the transferred data.
    If you want to merge the data into the first user account, use the Shared folder (it's in /Users/Shared): put the transferred files there, then, as the first user, go to the Shared folder and move the data into the folders you want.

  • Where to put EJB design patterns?

    I'm considering where to put the EJB design pattern classes and files,
    i.e. Facade, Data Access Objects, Transfer Objects, Delegate...
    I was thinking that since the EJB business logic and implementation should be hidden from the user, it is more logical to place these classes in the ejb.jar rather than the webapp.war.
    What are your thoughts guys?

    Photoshop patterns are normally stored in sets; the set files have an extension of .pat.  You may also add individual patterns using menu Edit>Define Pattern.  In some Photoshop pattern dialogs you will see a little gear icon that opens a fly-out menu; in that menu there is an entry for the Photoshop Preset Manager.  There are also other ways to get into the Preset Manager.  In the Preset Manager you can manage many types of presets. Using its pull-down menu, select Patterns.  You can load and save pattern sets.  To save a set you need to highlight the patterns you want to include in the set.  You can also delete and rename patterns.
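    (The reply above appears to answer a different, Photoshop-related question.) On the packaging question itself, a common split is: business interfaces and Transfer Objects go where the web tier can see them, while the implementations stay hidden inside the ejb.jar. A hedged sketch, with all names invented and an EJB3-style annotation assumed:

        // OrderFacade.java -- business interface, visible to the web tier
        public interface OrderFacade {
            OrderDTO findOrder(long id);
        }

        // OrderDTO.java -- Transfer Object that crosses the tier boundary
        public class OrderDTO implements java.io.Serializable {
            public long id;
            public String status;
        }

        // OrderFacadeBean.java -- Session Facade implementation, packaged in the ejb.jar
        @javax.ejb.Stateless
        public class OrderFacadeBean implements OrderFacade {
            public OrderDTO findOrder(long id) {
                return new OrderDAO().load(id);   // delegate persistence to the DAO
            }
        }

        // OrderDAO.java -- Data Access Object, also hidden in the ejb.jar
        class OrderDAO {
            OrderDTO load(long id) {
                OrderDTO dto = new OrderDTO();    // a real DAO would hit JDBC/JPA here
                dto.id = id;
                return dto;
            }
        }

    The war then depends only on OrderFacade and OrderDTO, which matches the "implementation hidden from the user" goal in the question.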

  • Where to put java code - Best Practice

    Hello. I am working with JDeveloper 11.2.2. I am trying to figure out the best practice for where to put code. After reviewing http://docs.oracle.com/cd/E26098_01/web.1112/e16182.pdf it seemed like the application module was the preferred spot (although many of the examples in the PDF are in main methods). After coding a while, though, I noticed that quite a few libraries were imported, and wondered whether this would impact performance.
    I reviewed postings on the forum, especially Re: Access service method (client interface) programmatically. That link mentions accessing code from a backing bean, and the gist of the recommendations seems to be to use the data control to drag it to the JSF page, or to use the bindings to access code.
    My interest lies in where to put Java code in the first place: in the View Object, Entity Object, AM object, backing bean... other?
    I can outline several best guesses about where to put code and the pros and cons:
    1. In the application module
    Pros: Centralized location for code makes development and support more simple as there are not multiple access points. Much like a data control centralizes services, the application module can act as a conduit for different pieces of code you have in objects in your model.
    Cons: Everything in one place means the application module becomes bloated. I am not sure how memory works in Java: if the app module imports tons of different libraries, are they all loaded even when a simple query re-execute method is called? Memory hog?
    2. Write code in the objects it affects. If you are writing code that accesses a view object, write it in a view object. Then make it visible to the client.
    Pros: The code is accessed via fewer conduits (for example, if you call the application module from a JSF backing bean, and the application module then calls the view object, you have three different pieces of code involved).
    Cons: The code gets spread out, harder to locate, etc.
    I would greatly appreciate your thoughts on the matter.
    Regards,
    Stuart

    First point here is when you say "where to put the java code" and you're referring to ADF BC, the point is you put "business logic java code" in the ADF Business Components. It's fine of course to have Java code in the ViewController layer that deals with the UI layer. Just don't put business logic in the UI layer, and don't put UI logic in the model layer. In your 2 examples you seem to be considering the ADF BC layer only, so I'll assume you mean business logic java code only.
    Meanwhile, I'm not keen on the term "best practice", as people follow best practices without thinking; typically best practices come with conditions, and people forget to apply them. Luckily you're not doing that here, as you've thought through the pros and cons of each (nice work).
    Anyway, back on topic and off my soap box, as for where to put your code, my thoughts:
    1) If you only have 1 or 2 methods, put them in the AppModuleImpl.
    2) If you have hundreds of methods, or there's a chance #1 above will morph into #2, split the code up between the AppModuleImpl, ViewImpls and ViewRowImpls. Why? Because your AM will become overloaded with hundreds of methods, making it unreadable. Instead, put the code where it should logically go: methods that work on a specific VO row go into the associated ViewRowImpl, methods that work across rows in a VO go into the ViewImpl, and methods that work across VOs go into the associated AppModuleImpl.
    To be honest, whichever option you choose, the one thing I do recommend as a best practice is to be consistent and document the standard so your other programmers know.
    Btw, there isn't an issue with loading lots of libraries/imports into a class; imports have no runtime cost. However, if your methods require lots of class variables, then yes, this will have a memory cost.
    On a side note if you're interested in more ideas around how to build ADF apps correctly think about joining the "ADF EMG", a free online forum which discusses ADF architecture, best practices (cough), deployment architectures and more.
    Regards,
    CM.
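    A compact sketch of the split CM describes in point 2; every class, VO and attribute name below is hypothetical, and only the oracle.jbo.server base classes are the real API:

        // OrderViewRowImpl.java -- logic scoped to a single row
        public class OrderViewRowImpl extends oracle.jbo.server.ViewRowImpl {
            public void markUrgent() {
                setAttribute("Priority", "URGENT");   // touches this row only
            }
        }

        // OrderViewImpl.java -- logic that works across the rows of one VO
        public class OrderViewImpl extends oracle.jbo.server.ViewObjectImpl {
            public int countUrgent() {
                int n = 0;
                oracle.jbo.RowSetIterator it = createRowSetIterator(null);
                while (it.hasNext()) {
                    if ("URGENT".equals(it.next().getAttribute("Priority"))) {
                        n++;
                    }
                }
                it.closeRowSetIterator();
                return n;
            }
        }

        // OrderModuleImpl.java -- logic that coordinates several VOs
        public class OrderModuleImpl extends oracle.jbo.server.ApplicationModuleImpl {
            public void escalateUrgentOrders() {
                // would call the generated VO accessors and span several VOs here
            }
        }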

  • Securing data from DBA access, like Credit Card Details

    Hello,
    Is there any way of hiding CC details at the DB level from all users except specific users,
    encrypting the CC data like Oracle's hashed passwords?
    For example:
    Case (1): user 1 (has access to these details)
        select acc#, customer_name from cc_details
        Output: it will show all the details decrypted.
    Case (2): user 2 (doesn't have access)
        select acc#, customer_name from cc_details
        Output: it will show all the details encrypted.
    Both at the DB level, e.g. using SQL*Plus or Toad.
    Any ideas?
    thanks and regards,

    Hi, Peter,
    You wrote:
    > Can you please document the problems you mention for Patch Sets/CPU? What are the vulnerabilities? Searched Alex's Web site but didn't find anything in regards to DBVault.
    I've told about these:
    http://dms.aladdin.ru/file.php?id=d7eb03f7f47ec3c68f4b1f1fe3317119
    http://dms.aladdin.ru/file.php?id=88cf1d7a962eddf7e57e2447d1e5b207
    and maybe this:
    http://dms.aladdin.ru/file.php?id=232eb8ed58d04295bb3920dbe805358d
    (Note: The link will be valid until 26 Jun 2008 GMT.)
    > In regard to reading data from the datafile, that's where TDE comes into the picture; then no-one can read from the data file directly. There is no user who owns TDE; TDE is enabled on a database-wide level. So the normal data owner (who is the only one who should have full access to his own data with DBVault) can use TDE to encrypt; no extra privileges needed.
    I've told about the user who is the owner of the database wallet (usually SYS). He can temporarily disable encryption, take the data, then restore encryption.
    > DBVault and TDE should be the perfect match for "securing data from DBA access, like Credit Card Details".
    In other words we have yet another administrator (the DV owner) instead of the good old SYS :)
    And I have a question: in case the protection of some tables with DV was set up by SYS, can he still perform (for example) a full backup or a full export of the data (his ordinary administrative tasks)? If yes, then it isn't protection; if no, then... what?
    The solution is somewhere else, I think.

  • I can't see the adjustments under the Adjustment tab in Aperture. Where is the data?

    Several months ago I dragged photos from Aperture into a Burn Folder on my Desktop. Today I dragged the photos from the Burn Folder on my Desktop into Aperture. My photos are all still edited, but I can't see any of the adjustments I made under the Adjustment Tab in Aperture. Where is that data? How do I get it back?

    I'll cover most of this because I'm too busy to trim it, but be sure to read Léonie's post.
    barbst wrote:
    Can you explain to me the difference between a Preview and the Master image? I checked my preferences and they are set "Photo Preview: Half Size" and "Photo Preview quality: 8 Medium".
    The Master is now known as the Original.  (Apple changed the term.  I don't know why.)  The "Original" is the term Aperture uses to designate the image-format file you imported into your Library.  When you import, Aperture creates a record in its database and creates an Image that you see in the Browser and the Viewer.  Every Image has an Original.
    Aperture creates additional files that it uses.  Among these are a Version, which is a text file containing instructions regarding adjustments and metadata changes you make, some thumbnails, which are image-format files in small dimensions, and a Preview, which is a JPG-format file that is automatically updated to include any adjustments you have made.
    The Original is _never_ altered.  Aperture is a _non-destructive_ photo developing workshop.  "Non-destructive" means that your original files (the ones that Aperture calls "Originals" once you import them) are never altered.
    The Preview is _always_ altered.  It always is updated to include all the adjustments you have made to any Image.
    As you know, you set the JPG parameters (size and quality) for your Previews in Aperture's preferences.
    Does this mean that all the photos I have dragged to Burn Folders and burned to DVDs are lesser quality than my Masters in my Aperture Library? I thought I was backing up the Masters!
    Sorry for the bad news, but the latter is not correct, and the former is.  When you drag an Image from the Browser or the Viewer, Aperture creates a copy of that Image's Preview.  (The User Manual should make this clear.  Currently it is incorrect.)
    Note, too, that your Masters (now "Originals") are never altered.  What you might have wanted to back up were the Images in your Library.  Now -- get ready for the weird part -- those Images do not exist as image-format files.  There is no file that we can access in order to back up an Image.  You can either create share-able image-format files from your Images, and back those up, or you can back up your Aperture Library, which, when restored, will give you the opportunity to create share-able image-format files from your Images.
    Aperture is structured this way -- it saves the imported file, untouched, and text instructions telling it how to create your Images on-the-fly -- specifically to save the space you would need to store full-size files of every Image in your Library.  Rather than save "the thing itself", it saves just what it needs to create "the thing itself".  The storage savings are enormous.
    This is why your workflow is (in general): import, make adjustments and add metadata, sort, group, and organize, and export _only when you need an image-format file to share with another program_.  You then share that file, and delete it (because if you need another, you can create it by exporting from Aperture).
    How about when I drag photos from my Aperture library to an external hard drive icon on my desktop? Am I dragging the Masters or Previews? If so, are the Previews lesser quality than the Masters in my Aperture Library?
    When you drag Images from Aperture, you create copies of those Images' Previews.  If you haven't made any adjustments, then they are Previews of your Masters (now called "Originals").  But Previews are JPGs with the dimensions and quality you told Aperture to use, whereas your Originals are whatever format, dimensions, and quality they were when you imported those files.  The quality of an Image's Preview is always less than the quality of that Image's Original.  (I'm using "quality" to mean "amount of useable data".)
    You should test this right now.  Create a new Finder folder.  In Aperture, select an Image with adjustments applied.  Drag it into your newly-created Finder folder.  Then export its Original into the Finder folder (File ▹ Export ▹ Original).  Then export the Version with whatever settings you want (File ▹ Export ▹ Version).  Compare the three files you just created.
    I think I'm going to be sick! Please tell me it isn't so! I have 1000s of beautiful photos from trips on both DVDs and a Western Digital external hard drive, but I dragged them all from my Aperture Library to either a Burn Folder or a Western Digital icon on my desktop, then I deleted the originals from my Aperture Library. I did this to make room on my Mac Book Pro hard drive. This is how Apple One on One taught me to backup my photos!
    I'm sorry.
    To back up your Originals, back them up.  They are files.  If they are in your Library, they get backed up when you back up your Library.
    To back up your Images, either make files from them, and back up those files, or ... back up your Library with the knowledge that that way you can create new files from your Images after restoring your Library from your back-up.
    To shrink the size of your Library so that it leaves more space on your system drive:
    - remove un-needed Images
    - relocate some Images' Originals to a second drive (almost always external).  Aperture makes this easy to do.
    Alternatively, put your entire Library on a second drive.  Libraries on external drives connected via USB-3 or Thunderbolt work fine.
    In general, a photographer's commitment to an image-management database system (of which Aperture is one) is complete: it works best when it is all-inclusive.  I recommend against removing Images from your Library either to save space or to archive them.  My personal Library catalogues all my photographs.  Using it I can find, and make available, any photo I've taken and decided to keep.  Once I decide to keep it, it stays -- forever -- as a record in my Library.
    Note that your "Library Set" (I just made that up, but it should exist) includes your Library package (that shows as a file in Finder) and any Referenced Originals that belong to Images in your Library.  You must back up your entire Library Set.
    The Geniuses at the Apple Store generally do a great job in demanding circumstances.  They are, by necessity, generalists who must be reasonably adept with a lot of software.  It seems you were (badly) misinformed.  Please recommend this post to the one who coached you, so that s/he makes sure that others aren't similarly misled.
    HTH,
    --Kirby.

  • Where to put my Airdisk/Drobo/Server/Whatever

    I am looking for advice in optimizing what I want to do with my external storage to get the most out of... uh, well, out of life. Or something.
    In the basement I have a Mac Mini connected to my TV that acts as my Tivo. It uses EyeTV with an external drive connected via Firewire (400). Works great.
    Two floors up I have an iMac that acts as my central computer. It is connected, via ethernet cable, to an Airport Extreme base station. The base station is connected to my DSL modem via ethernet cable and to another external drive via USB 2.0. Works pretty well...
    So that all laptops and the Mac Mini can access the media, my iTunes library is on the upstairs external drive (the one connected to the Airport). Generally no issue there, but lately I've noticed that movies and shows played from that drive might get jumpy when being watched on TV. I guess that may be because the media goes from the drive through the Airport, through two floors, then into the Mac Mini.
    Or maybe it's because the drive was getting ready to fail (it just did).
    So now I'm going to get a Drobo so that I never go through replacing a failed drive (and trying to recover it!) again. This is drive #3 for me in 5 years. (No more non-redundancy!) But the question is where to put it.
    Should I move the Airport and modem to the basement and connect the Drobo to the Airport? Should I just put it upstairs and chalk the slowness up to a failing drive? Should I connect the Drobo directly to the Mac Mini and share the drive (and will that lead to a new can of worms with the Airport)? And what cable connections should I use? I just learned that the USB bus pretty much handles everything through one place no matter how many ports you have, so I'm guessing I want to limit that...? But I only have 1 Firewire connection on the Mini...
    What do you guys think? And if you think there's a better forum to post this message in, let me know.
    Thanks in advance,
    Rick

    Hey CPC Guy,
    If at all possible, I'd find a way to set up your modem, router and its connected USB external drives somewhere centrally on the center floor. Then just connect wirelessly to it with the iMac.
    One cheap storage solution that I've used up to this point is to buy a 7-port USB hub, connect it to the Airport, and then add multiple external USB drives. You can also run a central USB printer from there, as well as an ethernet-connected network printer if you wanted. I've run something like this for a couple of years, all in a one-level home but centralized nonetheless. Works great. Wireless speeds are great.
    Some of these drives can serve as backup drives for the others, including your computers. I've used SuperDuper to backup over the network. A great source for dual drive enclosures is OWC (www.MacSales.com).
    Another solution that I'd recommend over the Drobo is an HP MediaSmart Windows Home Server. Similar to the Drobo, but I think it is a much better, more flexible, more cost-effective solution. In fact, I'm installing one as I write this. Bought it for $500 with free shipping from Newegg.com. It comes with a 750GB SATA drive and has three more upgradeable bays. I got the EX485. There is an EX487 and a new smaller one that would work as well. You might compare this with what the Drobo has to offer.
    There were supposed to have been some conflicts with installing via a connected Airport Extreme, but I have not had that problem. The one caveat is that you have to initialize it with a Windows PC first. I run XP and Vista via VMware Fusion, so that isn't a problem. Then you can use it fine with a Mac.
    Also, a friend who does data recovery for a living and is a die-hard Mac guy swears by Seagate (1), Samsung (2) and Hitachi (3). The highest rate of failures he has seen for many years is WD. I buy these three and have been fortunate not to have one fail yet. Knock on wood. YMMV.
    If you're looking to expand specifically around the mini in the basement, you might try the USB hub approach mentioned above. However, I think if you can centralize all your media on the middle level then streaming distances are much shorter and backups are much easier and faster over cabled connections. I just enclose it all in a small footprint media cabinet.
    Ed

  • Arabic data validation

    Hi,
    I have a custom infotype where the user inputs data in Arabic text (e.g. name, address, etc., everything in Arabic). All the field labels are in Arabic, so it is intuitive for the user to enter Arabic input in those fields. But to ensure data consistency, as part of data validation I want to put checks in place so that these fields don't accept anything but Arabic text.
    Any suggestions on how to implement this validation would be highly appreciated!

    I applied your CF formula with the named range and it works fine in 2013, with the same CF results whether the address is written in full or with the name (and I'm sure it would in other versions).
    How did you apply it, i.e. to which range, and what did you pre-select? There can be issues with relative addressing.
    In passing, are you sure you need to refer to such a large range? It could make recalc slow.
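    (The reply above looks like it was pasted from a different, spreadsheet-related thread.) For the Arabic-only check itself, the general idea is to test each field against the Arabic Unicode block. The original context is an SAP infotype, so this Java sketch only illustrates the approach, not the ABAP implementation:

        import java.util.regex.Pattern;

        // Illustrative check: accept only characters from the Arabic Unicode block
        // (which includes Arabic-Indic digits) plus whitespace.
        class ArabicValidator {
            private static final Pattern ARABIC_ONLY =
                    Pattern.compile("[\\p{InArabic}\\s]+");

            static boolean isArabicOnly(String input) {
                return input != null && ARABIC_ONLY.matcher(input).matches();
            }
        }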

  • How to get and put data in HTTP header in Client Side

    One JSP's function is to display data records page by page. Client users can click the "First Page", "Previous Page", "Next Page" or "Last Page" buttons to make the JSP show the corresponding page, so the JSP must remember the current page number. Since a number of client users may access the JSP at the same time, keeping the current page number in the session would create a lot of sessions at once, which would hurt performance; so the session approach is out. I plan to use the request and response headers to pass the current page number instead. I know how to use response.addHeader to put data in the response header, and request.getHeader to read the request header on the server side, but I do not know how to put data in the request header or read the response header on the client side. Could you please give me some help? If these methods won't work, is there another way? Thanks a lot.

    Why not pass it as a parameter with the URL?
    At the beginning of the page:
    <%
    int pageNumber = Integer.parseInt(request.getParameter("page"));
    %>
    Then, when defining your links:
    <a href="thisPage.jsp?page=<%= (pageNumber-1)%>">Previous</a>
    <a href="thisPage.jsp?page=<%= (pageNumber+1)%>">Next</a>
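    One caveat worth adding (my note, not part of the reply): the very first request carries no page parameter, and "Previous" should stop at the first page, so a small guard in place of the parseInt line above is typical:

        <%
        String p = request.getParameter("page");
        int pageNumber = (p == null) ? 0 : Integer.parseInt(p);  // default to the first page
        if (pageNumber < 0) pageNumber = 0;                      // clamp the lower bound
        %>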

  • List data validation failed when creating a new list item but does not fail when editing an existing item

    Dear SharePoint Experts,
    Please help.
    Why does my simple formula work in Excel but not work in SharePoint?
    Why does this formula...
    =IF([Request Type]="Review",(IF(ISBLANK([Request Date]),FALSE,TRUE)),TRUE)
    ...work in Excel but fail when I try to use it in SharePoint?
    The intent of this formula is the following...
    If the field "Request Type" has the value "Review" and the field "Request Data" is blank then show FALSE, otherwise show TRUE.
    SharePoint saves the formula, but when a list item is saved where the formula is implemented (under List Settings, List Validation), SharePoint does not say anything other than that the formula failed.
    Note that the "list data validation failed" error only happens when I am creating a new item; the formula above works just fine when one tries to Save on the edit form.
    Can you help?
    Thanks.
    -- Mark Kamoski

    Dear Jason,
    I appreciate your efforts.
    However, it seems to me that this statement of yours is not correct...
    "If it meet the validation formula, then you can new or edit the item, otherwise, it will throw the 'list data validation failed' error, it is by design".
    I believe this is NOT the answer for the following reasons.
    When I create a new item and click Save, the validation error is "list data validation failed".
    When I edit an existing item and click Save, the validation error is "my custom error message" and this is, I believe, the way it needs to work each time.
    I think, at the core, the error is that my formula does not handle some condition of null or blank or another default value.
    I tried a formula that casts the date back to a string, and then checked the string for a default value, but that did not work.
    I tried looking up the Correlation ID in the ULS when "list data validation failed" occurs, but that gave no useful information because, even though logging was set to Verbose, the stack trace in the error log was truncated and did not give any good details.
    However, it seems to me that SharePoint 2013 is not well suited for complex validation rules, because...
    SharePoint 2013 list-level validation (NOT column-level validation) allows only one formula input for all the multi-field validation rules in a given list; so, if I had more than one multi-field validation rule to implement on a given list, it would all need to be packed into that single-line-of-code formula style, like Excel does. That is not practical to write, debug, or maintain.
    SharePoint 2013 list-level validation also allows only one block of text for all such multi-field validation rules. So that will not work, because I would have something like "Validation failed for one or more of the following reasons: withdrawal cannot exceed available balance, date-of-birth cannot be after date-of-death, ... etc." That will not work for me.
    The real and awesome solution would simply be enhancing SP 2013 so that column-level validation formulas are able to reference other columns.
    But, for now, my workaround solution is to use JavaScript and jQuery and hook the onclick handler on the Save button, and that works well. The only problem is that the jQuery validation rules run before any of the column-level rules created with OOTB SP 2013, so in some cases there is an extra click for the end user.
    Thanks,
    Mark Kamoski
    -- Mark Kamoski
