Recommended way for Oracle SDO_GEOMETRY

I'm going to be using ColdFusion with an Oracle Spatial database to retrieve location data. Oracle stores this in an SDO_GEOMETRY column type that holds the location data; in my case it will contain point data with a longitude and latitude.
Since I intend to plug this all into a datagrid and do CRUD-type operations, I was wondering what the recommended way is for ColdFusion to define and use this type in CRUD SQL statements.
Thanks.
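A minimal sketch of what that SQL tends to look like for point data (the table and column names here are hypothetical, and from ColdFusion the bind values would normally go through cfqueryparam inside a cfquery tag): a 2-D longitude/latitude point in WGS84 (SRID 4326) is built with the SDO_GEOMETRY constructor on insert, and the coordinates can be read back from the SDO_POINT attribute on select.

-- 2001 = 2-D point geometry, 4326 = WGS84 longitude/latitude
INSERT INTO locations (id, name, geom)
VALUES (:id, :name,
        SDO_GEOMETRY(2001, 4326,
                     SDO_POINT_TYPE(:longitude, :latitude, NULL),
                     NULL, NULL));

-- reading the point back out (a table alias is required for the dot notation)
SELECT l.id,
       l.name,
       l.geom.SDO_POINT.X AS longitude,
       l.geom.SDO_POINT.Y AS latitude
FROM   locations l;

Updates follow the same pattern as the insert (rebuild the SDO_GEOMETRY value from the new longitude/latitude), and deletes need nothing special.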

Similar Messages

  • What is the recommended way for persisting JMS messages?

    What is the recommended way for persisting JMS messages? As per the iMQ admin documentation, the default built-in persistence type, which uses Unix flat files, is more efficient and faster than database persistence.
    I tried setting up the JDBC configuration for database persistence on iAS 6.5 and am getting the following error:
    [24/Apr/2002:16:09:20 PDT] [B1060]: Loading persistent data...
    [24/Apr/2002:16:09:21 PDT] Using plugged in persistent store: database connection
    url=jdbc:oracle:thin:@dbatool.mygazoo.com:1521:qa1 brokerid=ias01
    [24/Apr/2002:16:09:23 PDT] [B1039]: Broker "jmqbroker" ready.
    [24/Apr/2002:16:11:56 PDT] ERROR [B4012]: Failed to persist interest
    SystemManager%3ASystemManagerEngine%2BiMQ+Destination%0AgetName%28%29%3A%09%09SM_Response%0AClass%3A%09%09%09com.sun.messaging.Topic%0AgetVERSION%28%29%3A%09%092.0%0AisReadonly%28%29%3A%09%09false%0AgetProperties%28%29%3A%09%7BJMQDestinationName%3DSM_Response%2C+JMQDestinationDescription%3DA+Description+for+the+Destination+Object%7D:
    java.sql.SQLException: ORA-01401: inserted value too large for column
    [24/Apr/2002:16:11:56 PDT] WARNING [B2009]: Creation of consumer SM_Response to destination 1
    failed:com.sun.messaging.jmq.jmsserver.util.BrokerException: Failed to persist interest
    SystemManager%3ASystemManagerEngine%2BiMQ+Destination%0AgetName%28%29%3A%09%09SM_Response%0AClass%3A%09%09%09com.sun.messaging.Topic%0AgetVERSION%28%29%3A%09%092.0%0AisReadonly%28%29%3A%09%09false%0AgetProperties%28%29%3A%09%7BJMQDestinationName%3DSM_Response%2C+JMQDestinationDescription%3DA+Description+for+the+Destination+Object%7D:
    java.sql.SQLException: ORA-01401: inserted value too large for column
    Any thoughts?

    From the output, you are using imq 2.0. In that release
    the key used to persist a durable subscriber in the database
    table has a limit of 100 characters. The output shows that
    your value is:
    SystemManager%3ASystemManagerEngine%2BiMQ+Destination%0AgetName%28%29%3A%09%09SM_Res
    ponse%0AClass%3A%09%09%09com.sun.messaging.Topic%0AgetVERSION%28%29%3A%09%092.0%0Ais
    Readonly%28%29%3A%09%09false%0AgetProperties%28%29%3A%09%7BJMQDestinationName%3DSM_R
    esponse%2C+JMQDestinationDescription%3DA+Description+for+the+Destination+Object%7D:
    which is much longer than 100 characters.
    You might want to shorten the string you use for the
    durable name.
    And yes, the default file-based persistence store is
    more efficient when compared to the plugged-in persistence
    through a database.
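    If you want to confirm the column limit in your own broker schema, a hedged check against the Oracle data dictionary would look like this (the table name is hypothetical; use whatever table the JDBC store actually created):
    -- shows the declared width of each column, e.g. the ~100-character key column
    SELECT column_name, data_type, data_length
    FROM   user_tab_columns
    WHERE  table_name = 'IMQ_INTEREST_TABLE';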

  • Recommended throughput for Oracle data warehouse

    Hi, I know up front this is going to be a vague question...but I'm trying to determine approximate I/O bandwidth for a data mart server. Right now we're hosting 3 or 4 different marts on it, but that number is going to increase.
    Oracle's DW "2 day" class recommends starting with either maximum throughput from user queries, or basing it off of batch windows. Right now the server is barely used for end user queries, as we haven't yet implemented a BI tool to allow users easy access (that's underway right now). So I find it hard to base any info on that. However, it's on the way, and I'm in charge of the BI took (OBIEE). I'm having nightmares that we get OBIEE deployed, and our queries end up taking 5 minutes each to get answers... Right now, on the system basically by myself, if I do a simple "select sum(amount) from fact_ledger", where fact_ledger is a 1 Gb table (with 40 million rows), it takes almost a full minute to run. It feels like I could add this up by hand and get an answer faster...and this certainly doesn't compare with other Oracle marts / DWs I've worked on in the past.
    From a batch window standpoint, all I can say is that it feels really, REALLY too slow to me. Right now, some jobs that start with a 40 million row table, join it to 6 or 7 other small tables (looking up surrogate keys), and write to a non-logged, non-indexed output table take over 2 1/2 hours to complete. To me this should be a 15 minute job.
    We've asked IT to do a "root cause analysis" of why performance is so bad - but as part of that, the architecture group wants something more concrete than "it just feels way too slow". So does anyone have some general guidelines they can provide? I guess our detailed info would be:
    - three marts, each of which has a fact table around the 30 - 60 million row level
    - simple "join 30 million row staging table to look up surrogate keys" and writing results is taking 2.5+ hours
    - we expect at some point to have maybe 50 - 100 users running queries concurrently (spread across the marts)
    - users will be performing both canned and ad-hoc analysis against it...and they are high-level business users who aren't going to be happy waiting 2 minutes for a simple answer
    My first rough estimate (a swag) was that this requires 6 CPUs or so, which would indicate (according to Oracle's best practice docs) needing somewhere between 1.2 GB/s and 2.4 GB/s of throughput. I'm assuming that if it takes almost a full minute to read a 1 GB table, our I/O is currently 60 to 120 times too slow. Does that make sense?
    Thanks and sorry for the lack of details...we just don't know yet.
    Thx,
    Scott

    Why don't you start by taking an AWR report from those two hours, so you can see what the bottleneck for your system is?
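    A minimal sketch of how to pull that report on 10g (assuming you are licensed for the Diagnostics Pack): AWR snapshots are taken hourly by default, so find the two snapshot IDs that bracket the batch run and feed them to the standard report script.
    -- list the snapshots and their times
    SELECT snap_id, begin_interval_time, end_interval_time
    FROM   dba_hist_snapshot
    ORDER  BY snap_id;
    -- then generate the report interactively from SQL*Plus
    SQL> @?/rdbms/admin/awrrpt.sql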

  • Recommended books for Oracle 10g

    I'm new to this group, so my apologies in advance if this is not the appropriate group for this question. If it is not, please steer me in the right direction. Thanks!
    Here's my situation: I'm currently a Microsoft SQL Server developer and have been for about 6 years, so I'm fairly conversant with SQL. Our company is in the process of purchasing an application that runs under Oracle 10g, and I need to learn as much as I can as quickly as I can about 1) being the Oracle DBA for our (small) company, and 2) developing in Oracle SQL.
    Are there any specific books some of you would recommend for my particular situation (beginning level Oracle user, intermediate/advanced in SQL)? Any and all advice/suggestions/help will be gratefully accepted.
    Thanks in advance --
    Carl

    The first thing I would recommend is going to OTN and downloading the "2 Day DBA" article
    and the "Top 20 Oracle 10g Features" series:
    Top 20:
    http://www.oracle.com/technology/pub/articles/10gdba/index.html
    2 day dba:
    http://www.oracle.com/technology/pub/columns/kestelyn_manage.html

  • Recommended books for Oracle SQL power user

    I am an OCA 11g preparing for 1Z0-047 [Oracle Database SQL Certified Expert]. Passing 1Z0-047 is not my sole purpose; I wish to become an SQL power user.
    Which books do you recommend besides the one from Steve O'Hearn? I see there are quite a lot of advanced SQL books on Amazon. Which one is a must-have that sticks to SQL, without PL/SQL or another programming language?

    Other than Steve O'Hearn's book? "Pro Oracle SQL" (Apress) by Karen Morton et al.
    Hemant K Chitale

  • Recommended configuration for Oracle VM Server and VM Manager

    Hi,
    Currently we have two dedicated Dell servers (VM Server and VM Manager) on which we are building an Oracle stack environment.
    We are facing a problem which we didn't find answer from Oracle docs or via Google. The problem is that the virtual machines we create in VM Manager can't be connected to the outside world, and we think it's because our host (Hetzner) does not allow ports eth1-eth3 to be connected to the servers, unless we have these servers in the same rack and connected via switch.
    Our question is: is it a recommended configuration to have the VM Server and VM Manager servers in the same rack, connected via a switch?
    If yes, the next question is: if we need to add one more dedicated Dell server in the future to host virtual machines, is that possible if the new third server is not connected via the switch to the two servers we already have?
    If no, the next question is: what would be the recommended configuration and solution in our case, given that ports eth1-eth3 cannot be connected to the servers?
    Please do not hesitate to ask if you need more information. I do appreciate your time and expertise. Thanks.

    The architecture is now set up as shown in the graphic. For security reasons the Hetzner hosting service does not allow ports eth1-eth3 to be connected externally, but they do allow it on an internal network behind a switch.

  • Is there a way for Oracle to ignore passwords?

    Hello..
    Just curious if there is any way to set Oracle to do the following..
    I want to connect as a certain user (connect user/password@database)
    I want Oracle to let me connect as long as the user name passed corresponds to a user in Oracle with connect privileges, regardless of whether the password I pass is correct.
    I know that sounds like a strange thing to want to do.. But has anyone found a way to do that?
    Thanks,
    BP

    Read the documentation on 'external' passwords (which makes Oracle use OS authentication rather than its internal authentication). Perhaps that solves your problem...
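    A minimal sketch of what OS (external) authentication looks like, with a hypothetical OS user jsmith and the default os_authent_prefix of 'ops$':
    CREATE USER ops$jsmith IDENTIFIED EXTERNALLY;
    GRANT CREATE SESSION TO ops$jsmith;
    -- the OS user jsmith can then connect with no password at all:
    --   sqlplus /
    Note this skips the password check only because the operating system has already authenticated the user; Oracle will not accept a wrong password for a normal, password-authenticated account.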

  • [Gnome 3.8] Recommended way for upgrade? [SOLVED]

    Hi folks,
    Upgrading Gnome from version 3.6 to 3.8 by using "pacman -Syu" doesn't seem to be the best way to do this. When I gave it a try yesterday, pacman asked me, for a lot of packages (Gnome 3.6), whether I wanted to replace them with different packages from Gnome 3.8 (it looks as if the packages have been rearranged/renamed). In the end, pacman stopped because of some conflicting dependencies.
    So, would it be better to re-install Gnome using
    # pacman -S gnome gnome-extra
    although Gnome is already installed on my system?
    Or do you know any other appropriate way to upgrade from 3.6 to 3.8?
    Thanks
    Last edited by swordfish (2013-04-12 20:35:45)

    Inxsible wrote:
    WRONG! !
    pacman -Syu is the only way you should be upgrading. Remember, partial upgrades are not supported and will lead to a clusterfuck
    Actually, like the OP, I originally tried #pacman -Syu, which, because at least some Gnome 3.8 packages had hit testing (which I have enabled), would not succeed, mainly because of the changes made within gnome-games and its dependency on gnome-games-extra-data. I had to manually remove gnome-games-extra-data first before running #pacman -Syu again and answering yes to all the renamed gnome-games packages to be replaced by their new counterparts. I rebooted after that and ran the pacman upgrade command again, which came up clean. I was still experiencing some little issues here and there; it didn't seem like everything 3.8 got pulled in, and subsequent pacman runs were wanting to downgrade some of the packages I had just upgraded. I read in the thread below, where folks had been testing Gnome 3.8 from unstable, a suggestion (after it hit testing) that running #pacman -S testing/gnome-extra was necessary to get everything working:
    https://bbs.archlinux.org/viewtopic.php … 1#p1257391
    When I later ran #pacman -S testing/gnome testing/gnome-extra, it wasn't really a matter of cherry-picking anything, any more than running that without testing enabled would be. In this case, it was merely reinstalling all the newest gnome and gnome-extra packages (and maybe pulling in anything that had somehow been missed) because of the quirks I was seeing in pacman's behavior. Normally, pacman seems to pull from testing fine if something newer is there. I keep the testing repos enabled and, at least once a day, run #pacman -Syu to obtain my updates / upgrades. I also install packages that are new to my system with #pacman -S pkgname, normally allowing pacman to pull in whatever it decides per its conf file. I wasn't recommending that the OP turn on testing, grab what he wanted without doing a full upgrade first, then turn off testing, because I know that will quickly lead to a borked system.
    Last edited by sidneyk (2013-04-12 16:27:40)

  • Recommended RAID for Oracle VM server

    Hi everyone, I'm planning on installing OVM Manager and Server on 5 Sun X3-2s: 1 Manager and 4 Servers... I have two questions:
    1. Which RAID config would be best for this solution, since I'm using 4 x 300 GB disks in each server?
    2. I also have some spare space on a ZFS 7320, but these Sun X3-2 servers don't have Fibre Channel; is Ethernet fast enough for a production environment?
    Thanks

    Try looking at these Oracle technical white papers for some guidance on RAID and storage configurations. As a general rule you will always get the best performance and protection from a RAID 10 (1+0) configuration. I know from my own experience of running VMs on RAID 5, 6, and 10 that I will only run VMs on RAID 10. RAID 5 and 6 are far too slow for production VMs. If you do use RAID 5 or 6 for test/dev VMs, be aware that you may have serious disk I/O latency issues.
    Good luck, Russell
    These are more recent documents that may help.
    http://www.oracle.com/technetwork/server-storage/vm/ovm32-deployment-wp-1936835.pdf
    http://www.oracle.com/technetwork/server-storage/vm/ovm3-backup-recovery-1997244.pdf
    These are older documents that may help.
    http://www.oracle.com/technetwork/articles/systems-hardware-architecture/ovm-blade-ref-conf-192667.pdf
    http://www.oracle.com/technetwork/articles/systems-hardware-architecture/vm-solution-using-zfs-storage-174070.pdf

  • Recommended architecture for oracle bi ee

    Hi,
    We are trying to decide whether it is better to separate Presentation Services from the BI Server and host them on separate machines. We would like to know what the recommended architecture is. Is it better to install the web server and Oracle Presentation Server on one machine, and the Java host and BI Server on another machine?
    thanks.

    I don't know what Oracle's official recommended architecture is.
    However, the latest benchmarks from Oracle are based on BI/PS on each machine: http://rnm1978.wordpress.com/2009/09/18/collated-obiee-benchmarks/
    I'd say that's a pretty strong reason to go with BI/PS on each machine.
    FWIW, we run 2xBI/2xPS, but given the chance I'd probably redeploy to 4x BI&PS.

  • Books and links for Oracle financial certifications

    Hi All
    Are there any recommended books for Oracle Financials certifications?
    Can anyone give me a link where I can find the complete instructions and guidelines for the certifications?
    Thanks and Regards
    Message was edited by:
    user518322

    http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=39
    Check E-Business Suite 11i Application Certification

  • Discoverer 4i to OBIEE migration methodology - Oracle Recommended Way

    Hi,
    We are looking into a scenario where we are required to migrate a large number of Discoverer 4i reports to OBIEE. We have a good number of custom reports (approx. 200), and views too, that are to be migrated.
    I am aware of the migration assistant that comes with OBIEE 10.1.3.4. It's a very useful tool, but we need to consider the following facts before proceeding:
    1. The tool is not for migrating Oracle E-Business Suite Discoverer reports (BIS).
    2. The minimum version of Discoverer EUL that can be used with the Assistant is 10.1.2.n.
    3. The version of the RPD file that is generated is dependent upon the installed version of Oracle BI EE that the Assistant is run against but must be a minimum of 10.1.3.4.
    What is the Oracle recommended approach... a complete rebuild of the repository in OBIEE, or migration from 4i -> 9i -> 10g and then to 10.1.3.4, or...?
    Appreciate inputs from the product gurus and experts.
    Regards
    KSK
    Edited by: user633620 on Sep 30, 2008 10:04 PM

    OK, firstly, I'm not sure there is an Oracle recommended approach for migration.
    However, based on my experience at an eBS client, the Discoverer workbooks were very different from what we wanted to achieve with our new OBIEE dashboards. The Discoverer folders were based on complex views which joined the relevant OLTP tables together and contained both dimension attributes and facts in the same folder. They were used by our users mainly to extract data at the transaction level into Access databases; this is something we were keen to move away from, in favour of a more analytic, top-down approach that drills down into the detail rather than starting from it as in Disco. This did require some additional work with our user base to see what they were actually doing in Access and whether we could do it in OBIEE, which in most cases we could (although we did use Apex in places too).
    We looked at the migration tool, and it seemed to do a decent enough job technically, however for the reasons above ie. the move towards a top down solution we decided on a manual build approach.
    The model for OBIEE had to be different: we broke down the views into their component dimensions and facts, so that whilst the physical model was 3NF, we ensured the business model was a pure star schema (this was recommended by Oracle). The Subject Areas were kept quite small and aligned to the business process, no more than 10-12 logical tables with a couple of logical fact tables, and most of all our dev teams had a set of guidelines to follow to ensure we had consistent repositories (object names, ordering, logical levels, etc.).
    If you are using eBS I would have a look at some of the BI Applications Oracle has produced; you may get a good idea of best practice and standards for producing a repository from them. Also, the new 10.1.3.4 dashboard has some examples of good layout and some nice tips (i.e. one Answer per dashboard page, prompts at the top, etc.).
    Good Luck.

  • What is the recommended way to do multiple channel, single point sampling for control with an NI PCI-6255 in RLP?

    Hello,
    I am writing a driver for the M-series NI PCI-6255 for QNX. I have downloaded the MHDDK and have all the examples working. I have also enhanced the examples to do interrupt handling (e.g. on AI_FIFO interrupt or DMA Ring Buffer interrupt). My ultimate goal is to write a driver that I can use for closed-loop control at 500 Hz using all 80 channels of the NI PCI-6255. I may also need to synchronize each scan with a NI PCIe-7841R card for which I've already written a driver. I want an interrupt-driven solution (be it programmed I/O on an interrupt or DMA that generates an interrupt) so that the CPU is available to other threads while the 80 analog inputs are being read (since it takes quite a while). I also want to minimize the number of interrupts. Basically, I will need to collect one sample from all 80 channels every 2 milliseconds.
    There are many different options available to do so, but what is the recommended technique for the NI PCI-6255 card? I tried using the AI FIFO interrupt without DMA, but it seems to interrupt as soon as any data is in the AI FIFO (i.e. not empty condition), rather than when all 80 channels are in the FIFO, so more interrupts are generated than necessary. I tried using DMA in Ring Buffer mode to collect a single sample of 80 channels and interrupting on the DMA Ring Buffer interrupt, which appears to work better except that this technique runs into problems if I cannot copy all the data out of the DMA buffer before the next AI scan begins (because the DMA will start overwriting the buffer as it is in ring buffer mode). If the DMA is not in ring buffer mode or I make the ring buffer larger than one 80-channel sample then I don't have a way to generate an interrupt when one sample has been acquired (which I need, because I'm doing control).
    I saw something in the documentation about a DMA Continue mode in which it looks like you can switch between two different buffers (by programming the Base Count/Address with a different address than the current address) automatically and thereby double-buffer the DMA but there is no real documentation or examples on this capability. However, I think it would work better than the Ring Buffer because I could interrupt on the DMA CONT flag presumably and be copying data out of one buffer while it is filling the other buffer.
    Another option would be DMA chaining, but again, I cannot find any information on these features specific to the NI DAQs.
    I tried interrupting on AI STOP figuring that I could get a single interrupt for each scan, but that doesn't appear to work as expected.
    I know that DAQmx on Windows has the ability to do such single sample, multiple channel tasks at a fixed rate so the hardware must support it.
    Any suggestions would be appreciated.
    Thanks.
    Daniel Madill

    Hello,
    The interrupt that will happen nearest the times that you need is the AI_Start_Interrupt in the Interrupt_A group. This interrupt will occur with each sample clock. By the second time this interrupt fires, the AI FIFO should have the samples from the first conversion. If it is easier to use programmed IO, you can read the samples out of the FIFO until you get all 80.
    Additionally, you can set the DMA to send samples as soon as the FIFO is no longer empty...instead of waiting for half full or full. This change will reduce latency for your control loop. You can set AI_FIFO_Mode in AI_Mode_3_Register to 0. By the second time this interrupt fires, you should be able to check how much data is in the DMA ring buffer and read the 80 samples when they are available. You can make the ring buffer larger than 80 samples if you see data getting overwritten.
    There is no interrupt associated with 80 samples being available in the FIFO or 80 samples being available/transferred by DMA to the host. X Series has much more flexibility with these interrupts.
    I hope this helps!
    Steven T.

  • Third Party Printing Software recommendations for Oracle R12 AP Check printing

    We currently use Formscape for check printing in Oracle 11i. With the upgrade to R12, Oracle AP checks are PDF, and we are looking at alternative third-party tools.
    Any recommendations are appreciated.

    Review this note; it will point you to the source of the check printing program:
    R12: Master Troubleshooting Guide for Oracle Payables Check Printing issues (Doc ID 1353280.1)
    thanks

  • Note 830576 - Parameter recommendations for Oracle 10g

    Hi all DBA experts,
    I am not very familiar with the Oracle database. I was reading Note 830576 - Parameter recommendations for Oracle 10g, which contains this SAP general recommendation:
    You should delete obsolete initialization parameters from the profile.
    To determine which obsolete parameters are currently set, proceed as follows:
    SQL> SELECT NAME FROM V$OBSOLETE_PARAMETER WHERE ISSPECIFIED = 'TRUE';
    When I execute the above command, the result is "no rows selected",
    although there are many parameters in the SAP Note that are already obsolete and are not set in the initSID.ora file.
    For example, the parameter OPTIMIZER_INDEX_COST_ADJ is showing:
    #### OPTIMIZER MODE
    #optimizer_index_cost_adj = 10
    As you know, this parameter is very important for system performance.
    Please guide me: do I have to set these parameters, or is there no need, given that the query for obsolete parameters does not return any?
    Awaiting your valuable reply.
    Regards,

    Hi both,
    Thanks for sharing your knowledge with me and the other SDN users.
    Dear Orkun,
    OK. At this stage, I can recommend that you apply what they have suggested in the message. You already did part of it by configuring the Oracle parameters.
    SAP support sent me this file (PRD_Parameters)
    *** INFORMATION  1 ***
    *** INFORMATION  2 ***
    *** INFORMATION  3 ***
    *** INFORMATION  4 ***
    *** INFORMATION  5 ***
    *** INFORMATION  6 ***
    *** INFORMATION  7 ***
    *** INFORMATION  8 ***
    *** INFORMATION  9 ***
    *** INFORMATION 10 ***
    *** INFORMATION 11 ***
    _b_tree_bitmap_plans
    _fix_control (4728348)
    event (10753)
    event (38087)
    event (10183)
    optimizer_index_cost_adj
    star_transformation_enabled
    event (10027)
    event (10028)
    event (10411)
    event (10629)
    event (14532)
    _fix_control (5705630)
    _fix_control (5765456)
    _optimizer_mjc_enabled
    _sort_elimination_cost_ratio
    event (10091)
    event (10142)
    event (38068)
    event (38085)
    event (44951)
    parallel_execution_message_size
    parallel_threads_per_cpu
    query_rewrite_enabled
    log_archive_dest_1
    log_archive_format
    max_dump_file_size
    optimizer_features_enable
    log_archive_dest
    _push_join_union_view
    _cursor_features_enabled
    _first_spare_parameter
    event (10049)
    db_writer_processes
    parallel_max_servers
    db_cache_size
    pga_aggregate_target
    processes
    sessions
    dml_locks
    job_queue_processes
    log_checkpoint_interval
    remote_login_passwordfile
    sga_max_size
    shared_pool_reserved_size
    sort_area_retained_size
    sort_area_size
    statistics_level
    workarea_size_policy
    They only highlighted the following parameters from the above:
    **** INFORMATION  8 ***     DB Patchset: 10.2.0.4.0
    **** INFORMATION  9 ***     DB Mergefix: 0 (released before 2008-07-11)
    FYI, I recently applied the Oracle 10.2.0.4 patches in this sequence:
    MS Windows x86-64 (64-bit)
    Patchset_10204_MSWIN-x86-64aa.bin
    Patchset_10204_MSWIN-x86-64ab.bin
    Patchset_10204_MSWIN-x86-64ac.bin
    OPatch
    OPatch_10205_Generic_v0.zip
    Generic (32-bit / 64-bit)
    p8350262_10204_Generic.zip
    p7592030_10204_WIN_GENERIC.zip
    p9254968_10204_WIN_GENERIC.zip
    10204_Patch44_MSWIN-x86-64.zip
    p9584028_102040_Generic.zip
    p9843740_10204_Generic.zip
    And please tell me: do I still have to apply the highlighted parameters, or is there no need now?
    Regards,
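    One hedged way to cross-check any individual parameter from the note against what the instance actually has set (ISDEFAULT = 'FALSE' means the parameter was explicitly specified; the three names below are just examples taken from the list above):
    SELECT name, value, isdefault
    FROM   v$parameter
    WHERE  name IN ('optimizer_index_cost_adj',
                    'star_transformation_enabled',
                    'query_rewrite_enabled');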
