Question Regarding a Java Search Script

Hi folks, I've just signed up to see if you can help me with my dilemma.
I'm trying to implement a search engine on a friend's website, as she doesn't have the experience to do it herself. I have the search engine set up and running, but I didn't realise before I started (due to my habit of not reading instructions) that keywords have to be entered manually. That isn't practical for this particular site, as it has around 50-ish pages (not including linked Word or Excel files, but those don't need to be searched).
Is there a way to crawl the website and generate a keyword file? I have found some free online crawlers (spiders), but the site is only accessible to the public with a password, as it's for school teachers. I have to go now, but I'll be back; any help would be much appreciated. Thank you all in advance.
Regards Diablo2nd
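
Since the thread doesn't show a concrete starting point, here is a minimal sketch in Java of the crawl-and-collect-keywords idea for a single page. It assumes the site is protected with HTTP Basic authentication and uses a placeholder URL and credentials; if the site uses a login form instead, you would need to POST the form and carry the session cookie. A real version would also follow the links it finds and write the words out in whatever keyword-file format the search engine expects.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import java.util.LinkedHashSet;
    import java.util.Set;

    public class KeywordCrawler {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials; replace with the real protected page.
            String page = "https://example-school-site.invalid/index.html";
            String auth = Base64.getEncoder()
                    .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));

            // Fetch one page, sending HTTP Basic credentials.
            HttpURLConnection conn = (HttpURLConnection) new URL(page).openConnection();
            conn.setRequestProperty("Authorization", "Basic " + auth);

            StringBuilder html = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    html.append(line).append(' ');
                }
            }

            // Strip tags crudely and collect unique words as candidate keywords.
            String text = html.toString().replaceAll("<[^>]*>", " ");
            Set<String> keywords = new LinkedHashSet<>();
            for (String word : text.toLowerCase().split("[^a-z0-9]+")) {
                if (word.length() > 3) {
                    keywords.add(word);
                }
            }
            System.out.println(keywords);
        }
    }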

Thanks for the Reply/tip!

Similar Messages

  • 3 questions regarding duplicate script

    3 questions regarding duplicate script
    Here is my script for copying folders from one Mac to another Mac via Ethernet:
    (This is not meant as a backup, just to automatically distribute files to the other Mac.
    For backup I'm using Time Machine.)
    cop2drop("Macintosh HD:Users:home:Desktop", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Documents", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Pictures", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Sites", "zome's Public Folder:Drop Box:")
    on cop2drop(sourceFolder, destFolder)
        tell application "Finder"
            duplicate every file of folder sourceFolder to folder destFolder
            duplicate every folder of folder sourceFolder to folder destFolder
        end tell
    end cop2drop
    1. One problem I haven't sorted out yet: How can I modify this script so that
    all source folders (incl. their files and sub-folders) get copied
    as corresponding destination folders (same names) under the Drop Box?
    (At the moment the files and sub-folder arrive directly in the Drop Box
    and mix with the other destination files and sub-folders.)
    2. Every time before a duplicate starts, I have to confirm this message:
    "You can put items into "Drop Box", but you won't be able to see them. Do you want to continue?"
    How can I avoid or override this message? (This script is meant to run at night,
    when no one is near the computer to press OK again and again.)
    3. A few minutes after the script starts running I get:
    "AppleScript Error - Finder got an error: AppleEvent timed out."
    How can I stop this?
    Thanks in advance for your help!

    Hello
    In addition to what red_menace has said...
    1) I think you may still use System Events 'duplicate' command if you wish.
    Something like SCRIPT1a below. (Handler is modified so that it requires only one parameter.)
    *Note that the 'duplicate' command of Finder and System Events duplicates the source into the destination. E.g. A statement 'duplicate folder "A:B:C:" to folder "D:E:F:"' will result in the duplicated folder "D:E:F:C:".
    --SCRIPT1a
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
        set destFolder to "zome's Public Folder:Drop Box:"
        with timeout of 36000 seconds
            tell application "System Events"
                duplicate folder sourceFolder to folder destFolder
            end tell
        end timeout
    end cop2drop
    --END OF SCRIPT1a
    2) I don't know the said error -8068 thrown by Finder. It's likely a private Finder error code which is not listed in any of the public headers. And if it is a Finder thing, you may or may not see a different (and more helpful) error when using System Events to copy things into the Public Folder. You may also create a normal folder, e.g. one named 'Duplicate', in the Public Folder and use it as the destination.
    3) If you use rsync(1) and want to preserve extended attributes, resource forks and ACLs, you need to use the -E option. So at least 'rsync -aE' would be required. And I remember the looong thread that failed to tame rsync for your backup project...
    4) As for how to get the POSIX path of a file/folder in AppleScript, there are different ways.
    Strictly speaking, POSIX path is a property of the alias object. So the code to get the POSIX path of a folder whose HFS path is 'Macintosh HD:Users:home:Documents:' would be:
    POSIX path of ("Macintosh HD:Users:home:Documents:" as alias)
    POSIX path of ("Macintosh HD:Users:home:Documents" as alias)
    --> /Users/home/Documents/
    The first one is the cleanest code, because the HFS path of a directory is supposed to end with ":". The second one also works, because the 'as alias' coercion will detect whether the specified node is a file or a directory and return a proper alias object.
    And as for the code:
    set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
    It is there to strip the trailing '/' from the POSIX path of the directory and get '/Users/home/Documents', for example. I do this because in shell commands the trailing '/' of a directory path is not required, and indeed if it's present it makes certain commands behave differently.
    E.g.
    Provided /a/b/c and /d/e/f are both directories, cp /a/b/c /d/e/f will copy the source directory into the destination directory, while cp /a/b/c/ /d/e/f will copy the contents of the source directory into the destination directory.
    rsync(1) behaves in the same manner as cp(1) regarding the trailing '/' of the source directory.
    ditto(1) and cp(1) behave differently for the same arguments, i.e., ditto /a/b/c /d/e/f will copy the contents of the source directory into the destination directory.
    5) Just in case, here are revised versions of the previous SCRIPT2 and SCRIPT3, which require only one parameter. They will also append any error output to a file named 'cop2dropError.txt' on the current user's desktop.
    *These commands with the current options will preserve extended attributes, resource forks and ACLs when run under 10.5 or later.
    --SCRIPT2a - using cp(1)
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
        set destFolder to "zome's Public Folder:Drop Box:"
        set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
        set dst to (destFolder as alias)'s POSIX Path's text 1 thru -2
        set sh to "cp -pR " & quoted form of src & " " & quoted form of dst
        do shell script (sh & " 2>>~/Desktop/cop2dropError.txt")
    end cop2drop
    --END OF SCRIPT2a
    --SCRIPT3a - using ditto(1)
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
        set destFolder to "zome's Public Folder:Drop Box:"
        set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
        set dst to (destFolder as alias)'s POSIX Path's text 1 thru -2
        set sh to "src=" & quoted form of src & ";dst=" & quoted form of dst & ¬
            ";ditto \"${src}\" \"${dst}/${src##*/}\""
        do shell script (sh & " 2>>~/Desktop/cop2dropError.txt")
    end cop2drop
    --END OF SCRIPT3a
    Good luck,
    H

  • A question regarding Java project in ECLIPSE IDE

    A question regarding eclipse.
    No offence meant.
    I have added a jar file to the current project in its build path.
    This executes the concerned class file which is in the jar file.
    If I remove the jar file from the build path and run the project,
    it still executes that class file.
    I tried building a clean project and running it, but it still executes the jar file and the class.
    Any idea why this is happening?

    > A question regarding eclipse. No offence meant.
    People are not usually offended by Eclipse.
    > I have added a jar file to the current project in its build path. This executes the concerned class file which is in the jar file. If I remove the jar file from the build path and run the project, it still executes that class file. I tried building a clean project and running it, but it still executes the jar file and the class. Any idea why this is happening?
    You have that class file elsewhere, or you have confused your problem statement.
    Why don't you create a new project and add everything except that class/jar (whichever you mean)?

  • Question regarding JRockit

    Hi All,
    Question regarding JRockit.
    I do not know this software, nor am I a Java developer.
    What I would like to do is monitor the JVM running inside OC4J in Oracle E-Business Suite.
    Could this tool be used to do so?
    Also, I think what I really need is JRockit Mission Control. Can I simply install JRMC, or do I need to install JRockit and then MC?
    Thanks for sharing some light :)

    I am not sure if JRockit is certified with OC4J on EBS, but I seriously doubt it.
    You could look into a utility called jvisualvm which resides in $JAVA_HOME/bin
    Documented here: http://docs.oracle.com/javase/6/docs/technotes/guides/visualvm/index.html

  • Question regarding Interfaces: Please GUIDE.

    A question regarding INTERFACES.
    'Each interface definition constitutes a new type.
    As a result, a reference to any object instantiated from any class
    that implements a given interface can be treated as the type of
    the interface'.
    So:
    interface I {}
    class A implements I {}
    Now, class A is of type I, right?
    Now, if class A implements more than one interface, then what is the actual type of A?
    For example:
    interface I {}
    interface R {}
    class B implements I, R {}
    What is now B's type? I or R? Or both?
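
    A short compilable example (reusing the I, R and B names from the question) showing that a single B instance can be used wherever an I or an R is expected:

    interface I { }
    interface R { }

    class B implements I, R { }

    public class TypeDemo {
        public static void main(String[] args) {
            B b = new B();
            I asI = b;                            // fine: a B is-a I
            R asR = b;                            // fine: a B is-a R
            System.out.println(b instanceof I);   // true
            System.out.println(b instanceof R);   // true
            System.out.println(b.getClass());     // the runtime class is still B
        }
    }

    So B's own type is B, and it is also an I, an R and an Object; the compiler accepts a B anywhere any of those types is expected.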

    > The class (that implements the interface) actually defines the behavior, and the interface just serves as a contract for that behavior
    Yes.
    > - a view.
    Call it that if you want, but it being "a view" doesn't take away is-a-ness.
    > IMHO, the 'types' are the classes, which qualify for the 'is a' relationship
    As yawmark points out, your use of "type" is not consistent with the JLS. Regardless of how you want to define type, the fact is that it makes sense to say "A LinkedList is a List" and "A String is (a) Comparable", etc. Additionally, the way I've always seen the is-a relationship described, and the way that makes the most sense to me, is that "A is-a B" means "A can be used where B is expected." In this respect, superclasses and implemented interfaces are no different.
    > (which is what the words "extends" and "implements" strongly suggest)
    "Foo extends Bar" in plain English doesn't suggest to me that Foo is a Bar, but quite clearly, in the context of Java's OO model, it means precisely that.
    "Foo implements Bar" in plain English doesn't suggest much to me. Maybe that Foo provides the implementation specified in Bar, and therefore can be used where a Bar is required, which is exactly what implements means in Java and which, as far as I can tell, is the core of what the is-a relationship is supposed to be about in general OO.

  • Question regarding ExternalizableLite

    One question regarding extending ExternalizableLite
    If I have a class A which implements ExternalizableLite, as in
    class A {
        int index;
        B objectB;
    }
    I'd need to use ExternalizableHelper's writeObject and readObject when dealing with the objectB member.
    If I also make class B implement ExternalizableLite, will that help with serializing/deserializing class A? Or is it not going to help at all?
    Regards,
    Chen

    Hi Chen,
    if B is an ExternalizableLite implementor, then ExternalizableHelper.writeObject() will delegate to ExternalizableHelper.writeExternalizableLite(). That is, it will write out a byte indicating that the object is an ExternalizableLite, then write out the class name, and then delegate to your class B's writeExternal method. The improvement is the difference between your class B's writeExternal implementation and the Java serialization of the same class.
    If you know the type of the objectB member without writing the class name to the stream (e.g. if class B is final), then you can improve further by not delegating to writeObject, you can instead directly delegate to your class B's writeExternal method. Upon deserialization, you can instantiate the objectB member on your own, and delegate to the readExternal method of the newly instantiated objectB member.
    This allows you to spare the class name plus one byte in the serialized form of your A class.
    Best regards,
    Robert
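
    A rough sketch of the direct-delegation idea described above, assuming B is final so that A always knows the concrete type of its member; the field names here are made up for illustration:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import com.tangosol.io.ExternalizableLite;

    // Hypothetical classes, just to illustrate direct delegation.
    final class B implements ExternalizableLite {
        int value;

        public void readExternal(DataInput in) throws IOException {
            value = in.readInt();
        }

        public void writeExternal(DataOutput out) throws IOException {
            out.writeInt(value);
        }
    }

    class A implements ExternalizableLite {
        int index;
        B objectB;

        public void readExternal(DataInput in) throws IOException {
            index = in.readInt();
            // We know the concrete type (B is final), so instantiate it ourselves
            // and delegate, instead of calling ExternalizableHelper.readObject(in),
            // which would expect a type byte and a class name in the stream.
            objectB = new B();
            objectB.readExternal(in);
        }

        public void writeExternal(DataOutput out) throws IOException {
            out.writeInt(index);
            // Direct delegation: ExternalizableHelper.writeObject(out, objectB) would
            // also work, but it writes a type byte and the class name first.
            objectB.writeExternal(out);
        }
    }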

  • Questions regarding Disk I/O

    Hey there, I have some questions regarding disk i/o and I'm fairly new to Java.
    I've got an organized 500MB file and a table-like structure (represented by an array) that tells me the byte offsets of sections within the file. With this I'm currently retrieving blocks of data using the following approach:
    // Assume id is just some arbitrary int that represents an identifier.
    String f = "/scratch/torum/collection.jdx";
    int startByte = bytemap[id-1];
    int endByte = bytemap[id];
    try {
        FileInputStream stream = new FileInputStream(f);
        DataInputStream in = new DataInputStream(stream);
        in.skipBytes(startByte);
        int position = collectionSize - in.available();
        // Keep looping until the end of the block.
        while (position <= endByte) {
            line = in.readLine();
            // some processing here
            String[] entry = line.split(" ");
            String docid = entry[1];
            int tf = Integer.parseInt(entry[2]);
            // update the current position within the file.
            position = collectionSize - in.available();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    This code does EXACTLY what I want it to do, but with one complication: it isn't fast enough. I see that using BufferedReader is the choice after reading:
    http://java.sun.com/developer/technicalArticles/Programming/PerfTuning/
    I would love to use this class, but BufferedReader doesn't have a skipBytes() method, which is vital to achieve what I'm trying to do. I'm also aware that I shouldn't really be using the readLine() method of the DataInputStream class.
    So could anyone suggest improvements to this code?
    Thanks
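
    Since the byte map already gives the start and end offsets, one alternative (a sketch, with method and parameter names that are not from the original code) is to read the whole block in a single call and then parse the lines in memory, which avoids both the per-line stream reads and the repeated available() calls:

    import java.io.BufferedReader;
    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.RandomAccessFile;

    public class BlockReader {
        // Reads the byte range [startByte, endByte) in one call and parses it line by
        // line, so there is no per-line disk access and no need for available().
        static void readBlock(String f, long startByte, long endByte) throws IOException {
            byte[] block = new byte[(int) (endByte - startByte)];
            RandomAccessFile raf = new RandomAccessFile(f, "r");
            try {
                raf.seek(startByte);
                raf.readFully(block);
            } finally {
                raf.close();
            }
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(new ByteArrayInputStream(block)));
            String line;
            while ((line = reader.readLine()) != null) {
                String[] entry = line.split(" ");
                if (entry.length < 3) {
                    continue; // guard: the block may start or end mid-line
                }
                String docid = entry[1];
                int tf = Integer.parseInt(entry[2]);
                // ... process docid and tf here ...
            }
        }
    }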

    Okay, I've got some results, and it turns out DataInputStream is faster...
    EDIT: I was wrong. RandomAccessFile becomes a bit faster according to my test code when the block size to read is large.
    So I guess I could write two routines in my program: RandomAccessFile for when the block size is larger than some arbitrary value, and FileInputStream for small blocks.
    Here is the code:
    public void useRandomAccess() {
        String line = "";
        long start = 1385592, end = 1489808;
        try {
            RandomAccessFile in = new RandomAccessFile(f, "r");
            in.seek(start);
            while (start <= end) {
                line = in.readLine();
                String[] entry = line.split(" ");
                String docid = entry[1];
                int tf = Integer.parseInt(entry[2]);
                start = in.getFilePointer();
            }
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException ioe) {
            ioe.printStackTrace();
        }
    }

    public void inputStream() {
        String line = "";
        int startByte = 1385592, endByte = 1489808;
        try {
            FileInputStream stream = new FileInputStream(f);
            DataInputStream in = new DataInputStream(stream);
            in.skipBytes(startByte);
            int position = collectionSize - in.available();
            while (position <= endByte) {
                line = in.readLine();
                String[] entry = line.split(" ");
                String docid = entry[1];
                int tf = Integer.parseInt(entry[2]);
                position = collectionSize - in.available();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    and the main looks like this:
    public static void main(String[] args) {
        DiskTest dt = new DiskTest();
        long start = 0;
        long end = 0;
        start = System.currentTimeMillis();
        dt.useRandomAccess();
        end = System.currentTimeMillis();
        System.out.println("Random: " + (end - start) + "ms");
        start = System.currentTimeMillis();
        dt.inputStream();
        end = System.currentTimeMillis();
        System.out.println("Stream: " + (end - start) + "ms");
    }
    The result:
    Random: 345ms
    Stream: 235ms
    Hmmm not the kind of result I was hoping for... or is it something I've done wrong?

  • LVM questions regarding performance

    Over the holidays, I'll be redoing my server (running Arch) using LVM.  The situation is this: I have two internal HDDs (80GB apiece) on the IDE bus, and an external USB2.0 drive sitting around not being used. The current install is also partitioned in such a way that there are some partitions that are nearly empty. So naturally I want to put things into a big volume group so that I'm not wasting space, full well knowing that there may be a slight performance hit (that I can justify).
    I'll probably do it this way, using /boot because of the legacy GRUB limitation, with swaps outside of the LVM to make sure they're contiguous:
    /dev/sda1     /boot
    /dev/sda2     swap 1
    /dev/sda3     LVM partition #1
    /dev/sdb1     swap 2
    /dev/sdb2     LVM partition #2
    /dev/sdc1     LVM partition #3
    Anyway, my questions are these:
    1) Do certain filesystems perform better when housed on volume groups?
    2) As the two internal IDE drives are faster than the external, I would like to "prioritize" those two logical volume partitions. Is it possible to do this?
    I know they seem like simple questions. I've searched, but can't come up with anything searching the obvious search strings. So sorry if I'm overlooking something.

    Thanks for your fast reply!
    I moved from a replicated to a distributed cache:
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>example-distributed</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <!--
        Distributed caching scheme.
        -->
        <distributed-scheme>
          <scheme-name>example-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <class-scheme>
              <scheme-ref>unlimited-backing-map</scheme-ref>
            </class-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <!--
        Backing map scheme definition used by all the caches that do
        not require any eviction policies
        -->
        <class-scheme>
          <scheme-name>unlimited-backing-map</scheme-name>
          <class-name>com.tangosol.util.SafeHashMap</class-name>
          <init-params></init-params>
        </class-scheme>
      </caching-schemes>
    </cache-config>
    Everything is now much faster, which is fine.
    However, if I try to put 200k instead of 100k objects in the cache (I do this in putAll batches of 10k each) I get a java.lang.OutOfMemoryError: Java heap space.
    As you can see, the object has 2 ints only... so the raw data of 200k objects is only 1.6 MB.
    Any suggestions?
    Update:
    I increased the Java VM heap memory to 1 GB, and now it works.
    However, I notice that for each 100k objects (2 integers each) Java consumes ~80 MB of heap memory. At about 700 MB of heap memory the GC kicks in.
    How can this be optimized? 8 bytes of payload is not that much; I doubt that I'm able to put REAL data records into the cache this way.
    Thanks!
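
    For a sense of scale, per-entry overhead is not specific to Coherence: a plain HashMap in the same JVM already spends tens of bytes per entry on object headers, boxed keys and entry objects, before Coherence adds its serialized binary keys/values and backing-map structures. A rough local measurement sketch (not Coherence-specific, and System.gc()/freeMemory() only give an estimate):

    import java.util.HashMap;
    import java.util.Map;

    public class EntryOverheadDemo {
        // Illustrates why 200k "8 byte" objects need far more than 1.6 MB of heap:
        // every entry also carries object headers, a boxed key, a map entry object, etc.
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            System.gc();
            long before = rt.totalMemory() - rt.freeMemory();

            Map<Integer, int[]> map = new HashMap<Integer, int[]>();
            for (int i = 0; i < 100000; i++) {
                map.put(i, new int[] { i, i }); // two ints of payload per entry
            }

            System.gc();
            long after = rt.totalMemory() - rt.freeMemory();
            System.out.println("Approx. bytes per entry: " + (after - before) / map.size());
        }
    }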

  • FF67-Manual Bank Reconciliation....question regarding automatic clearing

    Hi
    Dear experts,
    I have a question regarding manual bank reconciliation. Everything is working fine as far as FF67 is concerned, but I am having a problem with automatic clearing for cheques received in. I know that an algorithm decides which field is taken into account for determining the clearing items, and cheques issued out are clearing fine by cheque number. For cheques received in, which algorithm should be used, and in which field should I enter it in Tcode F-28 so that it clears the document automatically? Or will I have to create an algorithm, and if so, from where?
    Thanks to all who reply
    regards

    You can use an algorithm like bank reference or document search, e.g. 021 or 020. While making the entries for a cheque receipt (customer), you have to put the cheque number in the reference field, and at the time of clearing the transactions from the statement, please ensure that the same cheque number is put in the clearing data.

  • This question regarding sending mail from sap

    Hi to all,
    This question is regarding sending mail from SAP.
    Right now I am able to send mails from client 500. What are the settings I need to make to send mails from my other client, 700?
    I am using ECC 6.0 with a SQL database.
    Regards,
    Krishna
    Moderator message: FAQ, please search for available information before asking.
    locked by: Thomas Zloch on Aug 16, 2010 2:11 PM


  • Question regarding the installation of a J2EE 6.40 Add-in

    Hi all,
    I would like to install a J2EE engine on a test instance of ECC 5.0 and have a few questions regarding the installation...
    Do I have to use the MASTER CD to first install the J2EE engine (Support Package 0) and then apply the latest support packages found on the SAP Marketplace?
    Or should I be able to directly install the J2EE Add-In using the latest support packages found on the SAP Marketplace?
    Best regards,
    Xavier Vermaut

    Thanks Bhavik for your reply,
    That's what I actually thought, but I get the following problem... Here's what I wrote in my customer message... I am still waiting for an answer and would like to get this solved ASAP.
    Dear SAP,
    We would like to install the J2EE 6.40 Add-In on our ECC 5.0 instance
    (TST) but get the following error message at the very beginning of the
    installation
    > Cannot find an installed ABAP system, which is a prerequisite for a
    > J2EE Add-In installation. The installation cannot continue.
    We checked the installation logs (sapinst_dev.log) and found the
    following :
    > Found these instances:
    > sid: MGR, number: 00, name: DVEBMGS00, host: erpqs1a
    > sid: TST, number: 10, name: DVEBMGS10, host: erpqs1a
    Why does the installation say that it cannot find any ABAP system when it has previously found the two different instances running on this server?
    Would this problem be related to the fact that we have two instances on
    this server?
    Please find hereunder the way we performed this installation :
    01) Download of the 4 different parts of SAP J2EE Engine 6.40 SP 10
         (Solaris 10 - Oracle)
         Part I   : SAPINST10_0-20000121.SAR         (Solaris 64)
         Part II  : CTRLORA10_0-20000121.SAR         (Solaris 64)
         Part III : J2EERTOS10_0-20000121.SAR        (Solaris 64)
         Part IV  : J2EERT10_0-10001982.SAR          (OS Independent)
    02) Extract these 4 archives into /install/J2EE_640
    03) Check Java Version and Environment Variables
    04) Check Solaris Pre-Requisites
    05) Adapt "product.xml" as specified in OSS Note 697535 (IGS)
    06) Log in as 'root'
    07) Set DISPLAY environment Variable
    08) Move to the Installation directory
          ( /install/J2EE_640/SAPINST-CD/SAPINST/UNIX/SUNOS_64 )
    09) ./sapinst
    10) In the 'Welcome to Netweaver Installation' screen, select
          => Dialog Instance Finalization
    Any idea how to get this solved?
    Best regards,
    Xavier Vermaut

  • Question relating to Java printing and where to post...

    First of all, I am not a developer, but I have a question regarding printing in an application that was developed using Java, and I'm not sure where to post it. I'll pose the question, then defer to a moderator to move it if necessary.
    Here's the issue and question:
    We are using an application that has a host/client structure on several systems that use Windows Vista and XP. I have been told by the support team for this application that they use Java for printing tasks. I don't know if the entire application or just the print engine were developed in Java.
    Here's the problem:
    Whenever I tell this app to print for the FIRST time after opening the app, it takes between 4 and 5 minutes before it brings up the print menu. Again, this ONLY happens the FIRST time I tell it to print, and subsequent print commands are processed in a more normal timeframe.
    If I close this app and immediately open it again, it goes through the same silly wait time for the FIRST print only.
    Tech support had me update Java to the latest incarnation on all machines, but that changed nothing. The guy even had the GALL to blame Java for the issue, can you believe that???
    Anyway, my question to you people here - who develop with Java - is: what in the world could this app (and I suppose Java) possibly be doing with this print job for 4 to 5 minutes?
    BTW... these are NEW systems, with dual-core processors and a minimum of 1.5 gigs (the server has 3 gigs) of RAM. Oh, and what I am printing is simply text, no graphics, just plain simple text.
    Any and all comments/suggestions appreciated...
    bob

    Thx ChuckBing,
    I agree with you about the application being the most logical source of the problem, and I have already provided their support with all the details that you suggest.
    I'll look at the link you provided and see if anything there steers me in a direction....
    bob
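
    The thread never pins down the cause, but a multi-minute first print in a Java application is very often time spent in print-service discovery (for example, waiting on unreachable network printers). If that turns out to be the culprit here, one workaround the vendor could try is to trigger the lookup once on a background thread at startup, so the first real print request finds the services already enumerated; a sketch under that assumption:

    import javax.print.PrintService;
    import javax.print.PrintServiceLookup;

    public class PrintWarmup {
        // Call once at application startup, e.g. from main(), so the first real
        // print request doesn't pay the discovery cost on the user's click.
        public static void warmUpAsync() {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    long start = System.currentTimeMillis();
                    // Passing null for flavor and attributes returns all print services.
                    PrintService[] services = PrintServiceLookup.lookupPrintServices(null, null);
                    System.out.println("Found " + services.length + " print services in "
                            + (System.currentTimeMillis() - start) + " ms");
                }
            }, "print-service-warmup");
            t.setDaemon(true);
            t.start();
        }
    }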

  • Questions regarding creating the database

    Hi there,
    From the previous posting, http://forum.java.sun.com/thread.jspa?threadID=640415&tstart=15 someone gave me the "formula" of connecting to the database:
    java.sql.Connection conn = java.sql.DriverManager.getConnection("jdbc:mysql://localhost/name_of_DB", "user", "password");
    Now just a couple of questions regarding the formula:
    1) Obviously, if I want the name of my DB, then I will have to create my DB. Can somebody please tell me the protocol for creating the DB? And where do I create this DB (i.e. can I create it anywhere in my application)? Or do I have to create a new database using MySQL itself?
    2) After creating a database, I would like to create multiple tables containing different data. Is it possible to place the code creating these tables anywhere I want in the application?
    Your ideas or advice would be much appreciated. Thank you in advance.
    Regards,
    Young

    1) Yes, you'll have to create the database using MySQL.
    2) You sure can, once you have the database created with the proper rights assigned to your user. You can put the code anywhere you want, but you may want to put it somewhere it only runs once, like on install, if you're doing a standalone app.
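
    To make that concrete, here is a minimal sketch that reuses the connection string from the earlier post and creates a table from application code; the database name, credentials and the table itself are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class SchemaSetup {
        public static void main(String[] args) throws SQLException {
            // The MySQL driver jar must be on the classpath; with older drivers you
            // may also need Class.forName("com.mysql.jdbc.Driver") before connecting.
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/name_of_DB", "user", "password");
            try {
                Statement stmt = conn.createStatement();
                // IF NOT EXISTS makes it safe to run this setup code more than once.
                stmt.executeUpdate("CREATE TABLE IF NOT EXISTS customers ("
                        + "id INT AUTO_INCREMENT PRIMARY KEY, "
                        + "name VARCHAR(100) NOT NULL)");
                stmt.close();
            } finally {
                conn.close();
            }
        }
    }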

  • Questions regarding Alert log & Control file

    Hello,
    This is my first post to this forum. I'm new to administering an 11.2.0.3 database. I have a couple of questions regarding Oracle DB.
    1) Which one is the preferred method among the two:
       checking the alert log file manually, or automating a shell script to check the alert log file?
    2) Does setting the CONTROL_FILE_RECORD_KEEP_TIME parameter to zero make the database unrecoverable, if an RMAN recovery catalog is not used?
    Any help will be highly appreciated. Thanks in advance.
    Regards

    1) Which one is the preferred method among the two: checking the alert log file manually, or automating a shell script to check the alert log file?
    That depends on your comfort level, but checking the alert log regularly is good.
    2) Does setting the CONTROL_FILE_RECORD_KEEP_TIME parameter to zero make the database unrecoverable, if an RMAN recovery catalog is not used?
    Yes. You cannot recover your database if it is set to zero and you don't have a recovery catalog.

  • Questions regarding authentication used in SRDemo ADF Tutorial

    Hello,
    I am currently in the process of learning ADF with the help of SR Demo application from OTN.
    I have a few questions regarding the authentication used in the SRDemo ADF tutorial.
    1) Why do I need to specify the list of all the users in the jazn-data.xml file with the name and credential attributes?
    2) How do I change the authentication mechanism to point to database tables? (If it's not using them already; correct me if I am wrong.)
    3) The prompt which I am getting for authentication is like the OEM authentication. Can I change this to a normal authentication login page?
    4) As an option, I would like the person who is logged in to Windows to be able to log in to the application. How can I pass the username (which I have in a Java string) to my application and get logged in?
    Any help is highly appreciated.
    Thanks

    Read these two, should help
    http://technology.amis.nl/blog/?p=1462
    http://stegemanoracle.blogspot.com/2006/02/using-custom-login-module-with-jdev.html
    But to be honest, I couldn't get it working.
