Larger internal HD?  Data x-fer

I have a 160 GB internal drive in my MacBook, but it is full with lots of music, photos and Parallels Desktop/Windows-based stuff. Can I get a larger drive for the MacBook? Can the data easily be transferred from the existing drive to the new one? What about all the Parallels Desktop/Windows data?
Thanks.

Yes, you can upgrade the drive very easily. I bought an external 2.5" USB 2.0 enclosure to put the stock drive in. I installed the new drive after putting the stock drive in the enclosure, then plugged the external drive into the USB port, booted from it and used SuperDuper to clone the external drive to the new internal drive. It works quite well, and you end up with a nice small portable external drive.
Here are some drives to consider. You must use a SATA drive.
http://eshop.macsales.com/shop/hard-drives/2.5-Notebook/
Installation video >>
-Bmer
Mac Owners Support Group - Join us @ MacOSG.com
  Mac611 Mobile Mac Support - about.Mac611.com
   iTunes:MacOSG Podcast | YouTube.MacOSG.com
                   An Apple User Group 
Have an iPhone or iPod touch? Enter Mac611.com in Safari on it for 'mobile Mac support.'
Note: I receive no compensation for product endorsements.

Similar Messages

  • How to move large number of internal table data to excel by program

    Hi,
    I am working on a classical report, and my requirement is:
    I have around 25 internal tables which I am displaying using a selection screen. I need to transfer the data of all of these internal tables to an Excel file on execution.
    Now, let me know how I can transfer all of them to Excel.
    P.S.: GUI_DOWNLOAD or other Excel-download-related FMs can be used to transfer single or fewer internal tables.
    But here I need to download the data of 25 internal tables through the program.
    How can I meet this requirement?
    Kindly advise.
    Thanks,
    Shiv.

    Hi,
    Refer to the following code:
    *& Report  ZDOWNLOAD_PROGRAM
    report zdownload_program.

    parameters: p_path type rlgrap-filename default 'C:\Pdata.xls'.

    data: gt_output type standard table of trdirt,
          wa_output type trdirt,
          p_filen   type string.

    at selection-screen on value-request for p_path.
      clear p_path.
      call function 'F4_FILENAME'
        importing
          file_name = p_path.

    start-of-selection.
      select * from trdirt
               into table gt_output
               where name like 'Z%'
                  or name like 'Y%'.

    end-of-selection.
      move p_path to p_filen.
      " Only the parameters actually needed are passed; the optional ones
      " from the function's signature are left at their defaults.
      call function 'GUI_DOWNLOAD'
        exporting
          filename                = p_filen
          filetype                = 'ASC'
          write_field_separator   = cl_abap_char_utilities=>horizontal_tab
        tables
          data_tab                = gt_output
        exceptions
          file_write_error        = 1
          no_batch                = 2
          gui_refuse_filetransfer = 3
          invalid_type            = 4
          no_authority            = 5
          unknown_error           = 6
          header_not_allowed      = 7
          separator_not_allowed   = 8
          filesize_not_allowed    = 9
          header_too_long         = 10
          dp_error_create         = 11
          dp_error_send           = 12
          dp_error_write          = 13
          unknown_dp_error        = 14
          access_denied           = 15
          dp_out_of_memory        = 16
          disk_full               = 17
          dp_timeout              = 18
          file_not_found          = 19
          dataprovider_exception  = 20
          control_flush_error     = 21
          others                  = 22.
      if sy-subrc <> 0.
        message id sy-msgid type sy-msgty number sy-msgno
                with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      endif.
    Regards
    Rajesh Kumar
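    Since the original question was about 25 different internal tables, one option (a hedged sketch, not from the reply above) is to wrap GUI_DOWNLOAD in a small FORM routine and call it once per table with its own file name. The table names gt_tab1/gt_tab2 and the C:\temp\... paths below are placeholders:
    " Hedged sketch: download several internal tables to separate files.
    form download_table tables it_data
                        using  iv_file type string.
      call function 'GUI_DOWNLOAD'
        exporting
          filename              = iv_file
          filetype              = 'ASC'
          write_field_separator = cl_abap_char_utilities=>horizontal_tab
        tables
          data_tab              = it_data
        exceptions
          others                = 1.
      if sy-subrc <> 0.
        message id sy-msgid type sy-msgty number sy-msgno
                with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      endif.
    endform.

    " Called once per internal table, e.g. at END-OF-SELECTION:
    perform download_table tables gt_tab1 using 'C:\temp\table01.xls'.
    perform download_table tables gt_tab2 using 'C:\temp\table02.xls'.
    " ...and so on for the remaining tables.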

  • Performance Issue in Large volume of data in report

    Hi,
    I have a report that will process a large amount of data, but it takes too long to process the data into the final ALV table. Currently I'm using this logic:
    Select ....
    Select for all entries...
    Loop at table into workarea...
    read table2 where key = workarea-key binary search.
    modify table.
    read table2 where key = workarea-key binary search.
    modify table.
    endloop.
    Currently I select all the data that I need (only the necessary fields), create a big loop and read the other table to insert values into the fields of the final table.
    Edited by: Alvin Rosales on Apr 8, 2009 9:49 AM

    Hi ,
    You can use field symbols instead of work area.
    If you use field symbols there is no need of modify statement.
    Here are two equivalent pieces of code:
    1) Using work areas:
    types: begin of lty_example,
             col1 type char1,
             col2 type char1,
             col3 type char1,
           end of lty_example.
    data: lt_example  type standard table of lty_example,
          lwa_example type lty_example.
    field-symbols: <lfs_example> type lty_example.
    Suppose you have the following information in your internal table:
    col1 col2 col3
    1      1    1
    1      2    2
    2      3    4
    Now you may use the modify statement using work areas
    loop at lt_example into lwa_example.
    lwa_example-col2 = '9'.
    modify lt_example index sy-tabix from lwa_example transporting col2.
    endloop.
    or, better, using field symbols:
    loop at lt_example assigning <lfs_example>.
      <lfs_example>-col2 = '9'.
      " here there is no need for a MODIFY statement
    endloop.
    The code using field symbols is about 10 times faster than using work areas and a MODIFY statement.
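    Applied to the original loop in the question (reading a second table with BINARY SEARCH), a hedged sketch would look like the following; the types, table names and fields (ty_data, gt_data, gt_lookup, key_field, col2) are placeholders, not taken from the thread:
    types: begin of ty_data,
             key_field type char10,
             col2      type char1,
           end of ty_data.
    data: gt_data   type standard table of ty_data,
          gt_lookup type standard table of ty_data,   " must be sorted by key_field
          ls_lookup type ty_data.
    field-symbols: <fs_data> type ty_data.

    loop at gt_data assigning <fs_data>.
      read table gt_lookup into ls_lookup
           with key key_field = <fs_data>-key_field
           binary search.
      if sy-subrc = 0.
        " write through the field symbol - no MODIFY statement needed
        <fs_data>-col2 = ls_lookup-col2.
      endif.
    endloop.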

  • Time Machine Slow & Backs Up Large Amounts of Data - The Fix

    This is about Time Machine becoming very slow, taking hours to back up, and trying to back up large amounts of data each time.
    I spent a month on the phone with 2nd level Tech support and engineering - every other day.
    I have 4 computers (one was a client's), all with different backup systems: Time Capsule over CAT 6 Gigabit Ethernet through a Thunderbolt monitor, internal eSATA to an external 6 TB RAID 5, wireless to Time Capsule, and FireWire 800 to an external 2 TB RAID 1 drive.
    With the install of 10.8.5 (the intermediate update to 10.9) I immediately had Time Machine problems on all four computers. As I said, I spent a month talking with Apple. Also during this time I purchased 2 new MacBook Pro 13" Retinas. I installed Norton Internet Security 5 on them and they had the same problem. The new Retinas had 10.9 on them, so this is about 10.8.5 and 10.9.
    After doing Data Capture on 3 of the computers for engineering (a new MBPro13R and my MacPro, and a MBPro 13 Mid-2012), engineering finally figured out what was happening.
    Norton Internet Security 5 for Mac is no longer compatible. As my 2nd-level tech said, engineering told him: "Engineering needs to call Norton and figure out what they did to their program to cause this."
    The fix is to completely remove Norton and all its components (using the uninstall app inside the Symantec folder). Once you do that, erase your backup drive, restart everything, and re-connect your computer to your drive; then it immediately works perfectly.
    Do Not Re-Install Norton.
    I have now done this exact operation on 7 different machines. Time Machine works instantly. Time Machine is not the same program it was in 10.8 - it has been completely rebuilt, so it doesn't back up in the same way. But it works perfectly on all my computers now.
    Don't forget the first backup will take hours since it is starting over clean and must do a complete backup the first time. After that it works fine.
    I'm not going to put Norton back in my computers until I find that Norton has acknowledged this problem, or they come out with Norton Internet Security 6.
    I reposted this here because on another post it was on page 20.
    Hope this helps

    Guitarfx wrote:
    Time Machine has worked properly for me for several months. However, in the last week or two I have noticed some very strange behavior. Basically, Time Machine wants to backup virtually my entire system every few days or so.
    How often do you let TM do backups? If it hasn't backed up for several days, it may indeed be doing a new, full backup. Apple doesn't document this, but many folks believe it's about 10 days.
    Download the +Time Machine Buddy+ widget from: http://www.apple.com/downloads/dashboard/status/timemachinebuddy.html. It shows the messages from your logs for one TM backup run at a time, in a small window.
    It first shows the most recent backup. Navigate to one of the large backups, and examine the messages. If you see something there that seems suspicious, copy and post the messages here (be sure to get them all, as sometimes they overflow the small window).
    I presume that for whatever reason, Time Machine believes my files are changing and therefore proceeds to back them all up. I can't really think of anything special I have done with my system in the last month which would cause this, but I have tried formatting my Time Capsule and starting over with a fresh backup--to no avail.
    There are some applications that cause large backups, often because they use a single file, often a database, to store their data. Entourage is a common one: every time you send or receive a single message, the whole database is changed, and will be backed-up the next time. Apple mail, of course, stores messages individually to prevent this.
    FileVault is similar, in an even bigger way: it converts your entire Home Folder into a single, encrypted disk image. So any change to anything in your Home Folder is treated as a change to the encrypted image. TM minimizes the impact, though, by only backing it up when you log out.
    So consider this while examining the results from TimeTracker, as VA2020 recommended.

  • Error in Generating reports with large amount of data using OBIR

    Hi all,
    we have integrated OBIR (Oracle BI Reporting) with OIM (Oracle Identity Management) to generate custom reports. Some of the custom reports contain a large amount of data (approx 80-90K rows with 7-8 columns), and the queries of these reports primarily use the audit tables and the resource form tables. Now when we try to generate the report, it works fine with HTML, where the report is generated directly on the console, but when we try to generate the same report and save it as PDF or Excel, it fails with the following error.
    [120509_133712190][][STATEMENT] Generating page [1314]
    [120509_133712193][][STATEMENT] Phase2 time used: 3ms
    [120509_133712193][][STATEMENT] Total time used: 41269ms for processing XSL-FO
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Helvetica closed.
    [120509_133712846][oracle.apps.xdo.common.font.FontFactory][STATEMENT] type1.Times-Roman closed.
    [120509_133712848][][PROCEDURE] FO+Gen time used: 41924 msecs
    [120509_133712848][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) is called.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] clearInputs(Object) done. All inputs are cleared.
    [120509_133712850][oracle.apps.xdo.template.FOProcessor][STATEMENT] End Memory: max=496MB, total=496MB, free=121MB
    [120509_133818606][][EXCEPTION] java.net.SocketException: Socket closed
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
    at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
    at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
    at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
    at weblogic.servlet.internal.ChunkOutput.write(ChunkOutput.java:304)
    at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
    at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at oracle.apps.xdo.servlet.util.IOUtil.readWrite(IOUtil.java:47)
    at oracle.apps.xdo.servlet.CoreProcessor.process(CoreProcessor.java:280)
    at oracle.apps.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:82)
    at oracle.apps.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:562)
    at oracle.apps.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:265)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:270)
    at oracle.apps.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:250)
    at oracle.apps.xdo.servlet.XDOServlet.doGet(XDOServlet.java:178)
    at oracle.apps.xdo.servlet.XDOServlet.doPost(XDOServlet.java:201)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at oracle.apps.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:97)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3496)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    It seems that we face this issue when the query processing takes some time. Do I need to perform any additional configuration to generate such reports?

    java.net.SocketException: Socket closed
         at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
         at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
         at weblogic.servlet.internal.ChunkOutput.writeChunkTransfer(ChunkOutput.java:525)
         at weblogic.servlet.internal.ChunkOutput.writeChunks(ChunkOutput.java:504)
         at weblogic.servlet.internal.ChunkOutput.flush(ChunkOutput.java:382)
         at weblogic.servlet.internal.CharsetChunkOutput.flush(CharsetChunkOutput.java:249)
         at weblogic.servlet.internal.ChunkOutput.checkForFlush(ChunkOutput.java:469)
         at weblogic.servlet.internal.CharsetChunkOutput.implWrite(CharsetChunkOutput.java:396)
         at weblogic.servlet.internal.CharsetChunkOutput.write(CharsetChunkOutput.java:198)
         at weblogic.servlet.internal.ChunkOutputWrapper.write(ChunkOutputWrapper.java:139)
         at weblogic.servlet.internal.ServletOutputStreamImpl.write(ServletOutputStreamImpl.java:169)
         at com.tej.systemi.util.AroundData.copyStream(AroundData.java:311)
         at com.tej.systemi.client.servlet.servant.Newdownloadsingle.producePageData(Newdownloadsingle.java:108)
         at com.tej.systemi.client.servlet.servant.BaseViewController.serve(BaseViewController.java:542)
         at com.tej.systemi.client.servlet.FrontController.doRequest(FrontController.java:226)
         at com.tej.systemi.client.servlet.FrontController.doPost(FrontController.java:128)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:292)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3498)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(Unknown Source)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2180)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2086)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1406)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:17
    (Please help find a solution to this issue; it is in production and we need it ASAP.)
    Thanks in Advance
    Edited by: 909601 on Jan 23, 2012 2:05 AM

  • DSS problems when publishing large amount of data fast

    Has anyone experienced problems when sending large amounts of data using the DSS? I have approximately 130 to 150 items that I send through the DSS to communicate between different parts of my application.
    There are several loops publishing data. One publishes approximately 50 items in a rate of 50ms, another about 40 items with 100ms publishing rate.
    I send a command to a subprogram (125 ms) that reads and publishes the answer on a DSS URL (approx. 125 ms). So that is one item on the DSS for about 250 ms. But this data is not seen on my main GUI window that reads the DSS URL.
    My questions are
    1. Is there any limit in speed (frequency) for data publishing in DSS?
    2. Can DSS become unstable if loaded too much?
    3. Can I lose/miss data in any situation?
    4. In the DSS Manager I have doubled the MaxItems and MaxConnections. How will this affect my system?
    5. When I run my full application I have experienced the following error: Fatal Internal Error: "memory.ccp", line 638. Can this be a result of my large application and the heavy load on the DSS? (see attached picture)
    Regards
    Idriz Zogaj
    Idriz "Minnet" Zogaj, M.Sc. Engineering Physics
    Memory Profesional
    direct: +46 (0) - 734 32 00 10
    http://www.zogaj.se

    LuI wrote:
    >
    > Hi all,
    >
    > I am frustrated on VISA serial comm. It looks so neat and its
    > fantastic what it supposes to do for a develloper, but sometimes one
    > runs into trouble very deep.
    > I have an app where I have to read large amounts of data streamed by
    > 13 µCs at 230kBaud. (They do not necessarily need to stream all at the
    > same time.)
    > I use either a Moxa multiport adapter C320 with 16 serial ports or -
    > for test purposes - a Keyspan serial-2-USB adapter with 4 serial
    > ports.
    Does it work better if you use the serial port(s) on your motherboard?
    If so, then get a better serial adapter. If not, look more closely at
    VISA.
    Some programs have some issues on serial adapters but run fine on a
    regular serial port. We've had that problem recently.
    Best, Mark

  • Regarding TXT File data truncation due to large amount of data

    Hi Guys,
    I am downloading data to a TXT file in the background. I am getting truncation of the records due to the large amount of data. If there is less data it works fine.
    I have checked the internal table SIZE for this, and anyway I have declared it with OCCURS 0 only.
    So please help me find out what the reason may be. I am confused: is there any limitation for TXT files?
    Please help me guys..
    Thanks in advance..
    Prabhu.R

    Hi Rakesh,
    two ways.
    1. Ask your BASIS team to increase the memory level.
    2. Check the PACKAGE SIZE option of the SELECT statement.
    Here you won't select all the data at once but in packets of a specified size. So get the packets of data and process them (there is a short sketch after this reply).
    Just press F1 on PACKAGE SIZE. That explanation will be enough to proceed further.
    Thanks,
    Vinod.
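    A minimal sketch of option 2 (the table VBAK, the 10,000-row packet size and the file path are placeholders, not taken from this thread): SELECT ... PACKAGE SIZE fills the internal table one packet at a time inside a SELECT/ENDSELECT block, so each packet can be transferred to the file before the next one is fetched.
    " Hedged sketch: write a large result set to an application-server file in packets.
    data: lt_chunk type standard table of vbak,
          ls_row   type vbak,
          lv_file  type string value '/tmp/output.txt'.

    open dataset lv_file for output in text mode encoding default.
    select * from vbak
             into table lt_chunk
             package size 10000.
      loop at lt_chunk into ls_row.
        transfer ls_row-vbeln to lv_file.   " process/write the packet before the next fetch
      endloop.
    endselect.
    close dataset lv_file.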

  • Best way to pass large amounts of data to subroutines?

    I'm writing a program with a large amount of data, around 900 variables.  What is the best way for me to pass parts of this data to different subroutines?  I have a main loop on a PXI RT Controller that is controlling hydraulic cylinders and the loop needs to be 1ms or better.  How on earth should I pass 900 variables through a loop and keep it at 1ms?  One large cluster??  Several smaller clusters??  Local Variables?  Global Variables??  Help me please!!!

    My suggestion, similar to Altenbach and Matt above, is to use a Functional Global Variable (FGV) and use a 1D array of 900 values to store the data in the FGV. You can retrieve individual data items from the FGV by passing in the index of the desired variable and the FGV returns the value from the array. Instead of passing in an index you could also use a TypeDef'd Enum with all of your variables as element of the Enum, which will allow you to place the Enum constant on the diagram and make selecting variables, as well as reading the diagram, simpler.
    My group is developing a LabVIEW component/example code with this functionality that we plan to publish on DevZone in a month or two.
    The attached RTF file shows the core piece of this implementation. This VI of course is non-reentrant. The Init case could be changed to allocate the internal 1D array as part of this VI rather than passing it in from another VI.
    Message Edited by Christian L on 01-31-2007 12:00 PM
    Christian Loew, CLA
    Principal Systems Engineer, National Instruments
    Please tip your answer providers with kudos.
    Any attached Code is provided As Is. It has not been tested or validated as a product, for use in a deployed application or system,
    or for use in hazardous environments. You assume all risks for use of the Code and use of the Code is subject
    to the Sample Code License Terms which can be found at: http://ni.com/samplecodelicense
    Attachments:
    CVT_Double_MemBlock.rtf ‏309 KB

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case.
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc. to the corresponding variable definition (for validation etc.) at runtime.
    CASE_ID VARCHAR2(13)
    COL001 VARCHAR2(10)
    ...
    COL250 VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
    I have enabled paging on the component. I use CachedRowSet on the bean for the page for getting the data. This works very well at the moment in my development environment. At the moment I am testing on small amount of data.
    I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. So I was wondering, apart from that instance, when viewing in paged mode does the component get all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

    Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
    I wonder if it has got to do with the logic of the paging. How do you specify which set of 20 records to extract with SQL?
    Thanks for your help!!

  • Open Large amount of data

    Hi
    I have a file on the application server in .dat format; it contains a large amount of data, maybe 2 million records or more. I need to open the file to check the record count. Is there any software or any option to open the file? I have tried opening it with Notepad, Excel... it gives an error.
    please let me know
    Thanks

    Hi,
    Try this..
    Go to AL11.
    Go to the file directory. There, for the file, you will see a field called length, which is the total length of the file in characters.
    If you know the length of a single line,
    divide the length of the file by the length of a single line; I believe you will get the number of records (see the sketch after this reply).
    Thanks,
    Naren
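    If you have developer access, another option (a hedged sketch; the file path below is a placeholder) is a small report that reads the file from the application server and simply counts the lines, assuming one record per line:
    " Hedged sketch: count records in an application-server file.
    data: lv_file  type string value '/usr/sap/trans/data/myfile.dat',
          lv_line  type string,
          lv_count type i.

    open dataset lv_file for input in text mode encoding default.
    if sy-subrc = 0.
      do.
        read dataset lv_file into lv_line.
        if sy-subrc <> 0.
          exit.          " end of file reached
        endif.
        lv_count = lv_count + 1.
      enddo.
      close dataset lv_file.
    endif.
    write: / 'Number of records:', lv_count.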

  • Bex Report Designer - Large amount of data issue

    Hi Experts,
    I am trying to execute (on the Portal) a report made in BEx Report Designer, with about 30,000 pages, and the only thing I am getting is a blank page. Everything works fine at about 3,000 pages. Do I need to set something to allow processing such a large amount of data?
    Regards
    Vladimir

    Hi Sauro,
    I have not seen this behavior, but it has been a while since I tried to send an input schedule that large. I think the last time was on a BPC NW 7.0 SP06 system and it worked OK. If you are on a recent support package, then you should search for relevant notes (none come to mind for me, but searching yourself is always a good idea) and if you don't find one then you should open a support message with SAP, with very specific instructions for recreating the problem from a clean input-schedule.
    Good luck,
    Ethan

  • Advice needed on how to keep large amounts of data

    Hi guys,
    I'm not sure what the best way is to make large amounts of data available to my Android app on the local device.
    For example, records of food ingredients, in the hundreds?
    I have read and successfully created .db's using this tutorial:
    http://help.adobe.com/en_US/AIR/1.5/devappsflex/WS5b3ccc516d4fbf351e63e3d118666ade46-7d49.html
    However, to populate the database I use Flash? So this kind of defeats the purpose of it. There is no point in me shifting a massive array of data from Flash to a SQL database when I could access the data directly from the AS3 array.
    So maybe I could create the .db with an external program? But then how would I include that .db in the APK file and then deploy it to the user's Android device?
    Or maybe I could create an AS3 class with an XML object in it and use that as a means of data storage?
    Any advice would be appreciated

    You can use any means you like to populate your SQLite database, including using external programs, (temporarily) embedding a text file with SQL statements, executing some SQL from AS3 code etc etc.
    Once you have populated your db, deploy it with your project:
    http://chrisgriffith.wordpress.com/2011/01/11/understanding-bundled-sqlite-databases-in-air-for-mobile/
    Cheers, - Jon -

  • How can I stabilise large transfers of data & retain full smart playlists?

    Hi,
    I hope somebody can help. Now that I have more than 20,000 songs in my iTunes, all of which I want to sync, my ipod classic 160gb goes into the 'infinite reset loop', or, when occasionally appearing to sync correctly, ends up with 0 songs, 0 video, etc. on the iPod.
    This is despite supposedly having some 22GB of unused space.
    I assumed it was my 2-and-a-half-year-old iPod succumbing to old age and bought a replacement, but the problem occurs on the new machine too.
    I have tried various solutions from the Discussion forum, e.g.:
    1) procedures for checking for hard-drive errors
    2) restoring to factory settings and then re-syncing
    3) I found one helpful tip for creating a new smart playlist to introduce the files in stages, which is supposed to stabilise the transfer of large amounts of data. While this was helpful, and appeared to stabilise the data, I was left without all my smart playlists (except the one I'd created for this exercise). Switching back to syncing the entire library, so as to regain all my smart playlists, merely made the iPod unstable again.
    I really want to be able to access my (extensive) smart playlists: I'm presuming it's the very amount of these that is contributing to the problem. Can anyone suggest a solution?
    Thanks!

    jcannon wrote:
    I am using my notebook whch only has 2GB RAm... I might need to try the workstation PC which has 8GB..
    I did read all the data using the Read from Text File VI and displayed it all in a string indicator (reading in x number of bytes/lines at a time)... How can I now plot this text/string data?
    8GB won't really help you unless you also use LabVIEW 64bit. 2GB should be OK.
    To graph it, you need to decimate it as shown in the link I gave you earlier. Most likely, your monitor has only a few thousand pixels across, so it is not worth displaying more points than that. It won't look any different if done right!
    Can you show us your code? Can you attach a small sample file that has the same structure, just with less data?
    LabVIEW Champion . Do more with less code and in less time .

  • Problem while having a large set of data to work on!

    Hi,
    I am facing a great problem with processing a large set of data. I have a requirement in which I'm supposed to generate a report.
    I have a table and an MView, which I have joined to reduce the number of records to process. The MView holds 20,000,000 records while the table holds 1,800,000. Based on the join conditions and the where clause I'm able to break the useful data down to approx 450,000 rows, and I'm getting 8 of my report columns from this join. I'm dumping these records into the table from which I'll be generating the report by spooling.
    Below is the block which takes 12mins to insert into the report table MY_ACCOUNT_PHOTON_DUMP:
    begin
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    insert into MY_ACCOUNT_PHOTON_DUMP --- Report table
    (SUBSCR_NO, ACCOUNT_NO, AREA_CODE, DEL_NO, CIRCLE, REGISTRATION_DT, EMAIL_ID, ALT_CNTCT_NO)
    select crm.SUBSCR_NO, crm.ACCOUNT_NO, crm.AREA_CODE, crm.DEL_NO, crm.CIRCLE_ID,
    aa.CREATED_DATE, aa.EMAIL_ID, aa.ALTERNATE_CONTACT
    from MV_CRM_SUBS_DTLS crm, --- MView
    (select /*+ ALL_ROWS */ A.ALTERNATE_CONTACT, A.CREATED_DATE, A.EMAIL_ID, B.SUBSCR_NO
    from MCCI_PROFILE_DTLS a, MCCI_PROFILE_SUBSCR_DTLS b
    where A.PROFILE_ID = B.PROFILE_ID
    and B.ACE_STATUS = 'N'
    ) aa --- Join of two tables giving me 1,800,000 recs
    where crm.SUBSCR_NO = aa.SUBSCR_NO
    and crm.SRVC_TYPE_ID = '125'
    and crm.END_DT IS NULL;
    INTERNET_METER_TABLE_PROC_1('MCCIPRD','MY_ACCOUNT_PHOTON_DUMP'); --- calling procedure to analyze the report table
    COMMIT;
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    end; --- 12 min 04 sec
    For the rest of the 13 columns required I am running a block which has a FOR UPDATE cursor on the report table:
    declare
    cursor cur is
    select SUBSCR_NO, ACCOUNT_NO, AREA_CODE, DEL_NO,
    CIRCLE, REGISTRATION_DT, EMAIL_ID, ALT_CNTCT_NO
    from MCCIPRD.MY_ACCOUNT_PHOTON_DUMP --where ACCOUNT_NO = 901237064
    for update of
    MRKT_SEGMNT, AON, ONLINE_PAY, PAID_AMNT, E_BILL, ECS, BILLED_AMNT,
    SRVC_TAX, BILL_PLAN, USAGE_IN_MB, USAGE_IN_MIN, NO_OF_LOGIN, PHOTON_TYPE;
    v_aon VARCHAR2(10) := NULL;
    v_online_pay VARCHAR2(10) := NULL;
    v_ebill VARCHAR2(10) := NULL;
    v_mkt_sgmnt VARCHAR2(50) := NULL;
    v_phtn_type VARCHAR2(50) := NULL;
    v_login NUMBER(10) := 0;
    v_paid_amnt VARCHAR2(50) := NULL;
    v_ecs VARCHAR2(10) := NULL;
    v_bill_plan VARCHAR2(100):= NULL;
    v_billed_amnt VARCHAR2(10) := NULL;
    v_srvc_tx_amnt VARCHAR2(10) := NULL;
    v_usg_mb NUMBER(10) := NULL;
    v_usg_min NUMBER(10) := NULL;
    begin
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    for rec in cur loop
    begin
    select apps.TTL_GET_DEL_AON@MCCI_TO_PRD591(rec.ACCOUNT_NO, rec.DEL_NO, rec.CIRCLE)
    into v_aon from dual;
    exception
    when others then
    v_aon := 'NA';
    end;
    SELECT DECODE(COUNT(*),0,'NO','YES') into v_online_pay
    FROM TTL_DESCRIPTIONS@MCCI_TO_PRD591
    WHERE DESCRIPTION_CODE IN(SELECT DESCRIPTION_CODE FROM TTL_BMF_TRANS_DESCR@MCCI_TO_PRD591
    WHERE BMF_TRANS_TYPE
    IN (SELECT BMF_TRANS_TYPE FROM
    TTL_BMF@MCCI_TO_PRD591 WHERE ACCOUNT_NO = rec.ACCOUNT_NO
    AND POST_DATE BETWEEN
    TO_DATE('01-'||TO_CHAR(SYSDATE,'MM-YYYY'),'DD-MM-YYYY') AND SYSDATE
    AND DESCRIPTION_TEXT IN (select DESCRIPTION from fnd_lookup_values@MCCI_TO_PRD591 where
    LOOKUP_TYPE='TTL_ONLINE_PAYMENT')));
    SELECT decode(count( *),0,'NO','YES') into v_ebill
    FROM TTL_CUST_ADD_DTLS@MCCI_TO_PRD591
    WHERE CUST_ACCT_NBR = rec.ACCOUNT_NO
    AND UPPER(CUSTOMER_PREF_MODE) ='EMAIL';
    begin
    select ACC_SUB_CAT_DESC into v_mkt_sgmnt
    from ttl_cust_dtls@MCCI_TO_PRD591 a, TTL_ACCOUNT_CATEGORIES@MCCI_TO_PRD591 b
    where a.CUST_ACCT_NBR = rec.ACCOUNT_NO
    and a.market_code = b.ACC_SUB_CAT;
    exception
    when others then
    v_mkt_sgmnt := 'NA';
    end;
    begin
    select nvl(sum(TRANS_AMOUNT),0) into v_paid_amnt
    from ttl_bmf@MCCI_TO_PRD591
    where account_no = rec.ACCOUNT_NO
    AND POST_DATE
    BETWEEN TO_DATE('01-'||TO_CHAR(SYSDATE,'MM-YYYY'),'DD-MM-YYYY')
    AND SYSDATE;
    exception
    when others then
    v_paid_amnt := 'NA';
    end;
    SELECT decode(count(1),0,'NO','YES') into v_ecs
    from ts.Billdesk_Registration_MV@MCCI_TO_PRD591 where ACCOUNT_NO = rec.ACCOUNT_NO
    and UPPER(REGISTRATION_TYPE ) = 'ECS';
    SELECT decode(COUNT(*),0,'PHOTON WHIZ','PHOTON PLUS') into v_phtn_type
    FROM ts.ttl_cust_ord_prdt_dtls@MCCI_TO_PRD591 A, ttl_product_mstr@MCCI_TO_PRD591 b
    WHERE A.SUBSCRIBER_NBR = rec.SUBSCR_NO
    and (A.prdt_disconnection_date IS NULL OR A.prdt_disconnection_date > SYSDATE )
    AND A.prdt_disc_flag = 'N'
    AND A.prdt_nbr = b.product_number
    AND A.prdt_type_id = b.prouduct_type_id
    AND b.first_level LIKE 'Feature%'
    AND UPPER (b.product_desc) LIKE '%HSIA%';
    SELECT count(1) into v_login
    FROM MCCIPRD.MYACCOUNT_SESSION_INFO a
    WHERE (A.DEL_NO = rec.DEL_NO or A.DEL_NO = ltrim(rec.AREA_CODE,'0')||rec.DEL_NO)
    AND to_char(A.LOGIN_TIME,'Mon-YYYY') = to_char(sysdate-5,'Mon-YYYY');
    begin
    select PACKAGE_NAME, BILLED_AMOUNT, SERVICE_TAX_AMOUNT, USAGE_IN_MB, USAGE_IN_MIN
    into v_bill_plan, v_billed_amnt, v_srvc_tx_amnt, v_usg_mb, v_usg_min from
    (select rank() over(order by STATEMENT_DATE desc) rk,
    PACKAGE_NAME, USAGE_IN_MB, USAGE_IN_MIN,
    nvl(BILLED_AMOUNT,'0') BILLED_AMOUNT, NVL(SRVC_TAX_AMNT,'0') SERVICE_TAX_AMOUNT
    from MCCIPRD.MCCI_IM_BILLED_DATA
    where (DEL_NUM = rec.DEL_NO or DEL_NUM = ltrim(rec.AREA_CODE,'0')||rec.DEL_NO)
    and STATEMENT_DATE like '%'||to_char(SYSDATE,'Mon-YY')||'%')
    where rk = 1;
    exception
    when others then
    v_bill_plan := 'NA';
    v_billed_amnt := '0';
    v_srvc_tx_amnt := '0';
    v_usg_mb := 0;
    v_usg_min := 0;
    end;
    -- UPDATE THE DUMP TABLE --
    update MCCIPRD.MY_ACCOUNT_PHOTON_DUMP
    set MRKT_SEGMNT = v_mkt_sgmnt, AON = v_aon, ONLINE_PAY = v_online_pay, PAID_AMNT = v_paid_amnt,
    E_BILL = v_ebill, ECS = v_ecs, BILLED_AMNT = v_billed_amnt, SRVC_TAX = v_srvc_tx_amnt,
    BILL_PLAN = v_bill_plan, USAGE_IN_MB = v_usg_mb, USAGE_IN_MIN = v_usg_min, NO_OF_LOGIN = v_login,
    PHOTON_TYPE = v_phtn_type
    where current of cur;
    end loop;
    COMMIT;
    dbms_output.put_line(to_char(sysdate,'hh24:mi:ss'));
    exception when others then
    dbms_output.put_line(SQLCODE||'::'||SQLERRM);
    end;
    The report takes more than 6 hours. I know that most of the SELECT queries have ACCOUNT_NO in the WHERE clause and could be joined, but when I joined a few of these blocks with the initial INSERT query it was no better.
    The individual queries within the cursor loop don't take more than 0.3 sec to execute.
    I'm using the FOR UPDATE as I know that the report table is being used solely for this purpose.
    Can somebody please help me with this; I'm in desperate need of good advice here.
    Thanks!!
    Edited by: user11089213 on Aug 30, 2011 12:01 AM

    Hi,
    Below is the explain plan for the original query:
    select /*+ ALL_ROWS */  crm.SUBSCR_NO, crm.ACCOUNT_NO, ltrim(crm.AREA_CODE,'0'), crm.DEL_NO, crm.CIRCLE_ID
    from MV_CRM_SUBS_DTLS crm,
            (select /*+ ALL_ROWS */  A.ALTERNATE_CONTACT, A.CREATED_DATE, A.EMAIL_ID, B.SUBSCR_NO
            from MCCIPRD.MCCI_PROFILE_DTLS a, MCCIPRD.MCCI_PROFILE_SUBSCR_DTLS b
            where A.PROFILE_ID = B.PROFILE_ID
            and   B.ACE_STATUS = 'N'
            ) aa
    where crm.SUBSCR_NO    = aa.SUBSCR_NO
    and   crm.SRVC_TYPE_ID = '125'
    and   crm.END_DT IS NULL
    | Id  | Operation              | Name                     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |                          |  1481K|   100M|       |   245K  (5)| 00:49:09 |
    |*  1 |  HASH JOIN             |                          |  1481K|   100M|    46M|   245K  (5)| 00:49:09 |
    |*  2 |   HASH JOIN            |                          |  1480K|    29M|    38M| 13884   (9)| 00:02:47 |
    |*  3 |    TABLE ACCESS FULL   | MCCI_PROFILE_SUBSCR_DTLS |  1480K|    21M|       |  3383  (13)| 00:00:41 |
    |   4 |    INDEX FAST FULL SCAN| SYS_C002680              |  2513K|    14M|       |  6024   (5)| 00:01:13 |
    |*  5 |   MAT_VIEW ACCESS FULL | MV_CRM_SUBS_DTLS_08AUG   |  1740K|    82M|       |   224K  (5)| 00:44:49 |
    Predicate Information (identified by operation id):
       1 - access("CRM"."SUBSCR_NO"="B"."SUBSCR_NO")
       2 - access("A"."PROFILE_ID"="B"."PROFILE_ID")
       3 - filter("B"."ACE_STATUS"='N')
       5 - filter("CRM"."END_DT" IS NULL AND "CRM"."SRVC_TYPE_ID"='125')Whereas for the modified MView query, the plane remains the same:
    select /*+ ALL_ROWS */ crm.SUBSCR_NO, crm.ACCOUNT_NO, ltrim(crm.AREA_CODE,'0'), crm.DEL_NO, crm.CIRCLE_ID
    from    (select * from MV_CRM_SUBS_DTLS
             where SRVC_TYPE_ID = '125'
             and   END_DT IS NULL) crm,
            (select /*+ ALL_ROWS */  A.ALTERNATE_CONTACT, A.CREATED_DATE, A.EMAIL_ID, B.SUBSCR_NO
            from MCCIPRD.MCCI_PROFILE_DTLS a, MCCIPRD.MCCI_PROFILE_SUBSCR_DTLS b
            where A.PROFILE_ID = B.PROFILE_ID
            and   B.ACE_STATUS = 'N'
            ) aa
    where crm.SUBSCR_NO  = aa.SUBSCR_NO
    | Id  | Operation              | Name                     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT       |                          |  1481K|   100M|       |   245K  (5)| 00:49:09 |
    |*  1 |  HASH JOIN             |                          |  1481K|   100M|    46M|   245K  (5)| 00:49:09 |
    |*  2 |   HASH JOIN            |                          |  1480K|    29M|    38M| 13884   (9)| 00:02:47 |
    |*  3 |    TABLE ACCESS FULL   | MCCI_PROFILE_SUBSCR_DTLS |  1480K|    21M|       |  3383  (13)| 00:00:41 |
    |   4 |    INDEX FAST FULL SCAN| SYS_C002680              |  2513K|    14M|       |  6024   (5)| 00:01:13 |
    |*  5 |   MAT_VIEW ACCESS FULL | MV_CRM_SUBS_DTLS_08AUG   |  1740K|    82M|       |   224K  (5)| 00:44:49 |
    Predicate Information (identified by operation id):
       1 - access("CRM"."SUBSCR_NO"="B"."SUBSCR_NO")
       2 - access("A"."PROFILE_ID"="B"."PROFILE_ID")
       3 - filter("B"."ACE_STATUS"='N')
       5 - filter("CRM"."END_DT" IS NULL AND "CRM"."SRVC_TYPE_ID"='125')Also took your advice and tried to merge all the queries into single INSERT SQL, will be posting the results shortly.
    Edited by: BluShadow on 30-Aug-2011 10:21
    added {noformat}{noformat} tags. Please read {message:id=9360002} to learn to do this yourself.

  • With journaling, I have found that my computer is saving a large amount of data, logs of all the changes I make to files; how can I clean up these logs?

    For example, in Notes, I have written three notes; however if I click on 'All On My Mac' on the side bar, I see about 10 different versions of each note I make, it saves a version every time I add or delete a sentence.
    I also noticed, that when I write an email, Mail saves about 10 or more draft versions before the final is sent.
    I understand that all this journaling provides a level of security and prevents data loss; but I was wondering, is there a function to clean up journal logs once in a while?
    Thanks
    Roz

    Are you using Microsoft word?  Microsoft thinks the users are idiots. They put up a lot of pointless messages that annoy & worry users.  I have seen this message from Microsoft word.  It's annoying.
    As BDaqua points out...
    When you copy information via edit > copy,  command + c, edit > cut, or command +x, you place the information on the clipboard. When you paste information, edit > paste or command + v, you copy information from the clipboard to your data file.
    If you edit > cut or command + x and you do not paste the information and you quit Word, you could be losing information.  Microsoft is very worried about this. When you quit Word, Microsoft checks if there is information on the clipboard and, if so, Microsoft puts out this message.
    You should be saving your work more than once a day. I'd save every 5 minutes.  command + s does a save.
    Robert

Maybe you are looking for

  • Conditions in case statement

    Hello : Is it possible to make something like this conditions in a Case statment?: Case mycondition of (mycondition >= 10 and mycondition <= 50 ): --do this etc... end Case I know that can use If conditional but is it possible of the above? Thanks in

  • Private iTunes U: Upload error

    Attempting to upload content to my private ITunes U and am getting this error: Error: Upload Failed. Try again.  Users are still able to download content.  To clarify, even as the iTunes U Admin I am unable to upload content.  On my public site, I am

  • CS Design & Web Premium Upgrade from CS5.5 to CS6 on DVD

    I would like to order a CS Design & Web Premium Upgrade from CS5.5 to CS6 (int. english, MAC) in the Adobe Shop. I have a really low internet connection and can not download 4+GB, it does not work. Even files over 20MB are pure horror. So why I can n

  • Native Tooltip in Dreamweaver CC?

    I don't see it under the jQuary UI widgets. I'm hoping that it's in there but I'm just missing it. Otherwise, it looks like I will have to download and insert manually from jQuery. I can't afford any of the commmercial exttensionsn at the moment. Doe

  • 9iAS and PL/SQL

    Hello, We are building an application based on EJBs and plan to deploy them to Oracle's 9iAS. In order to support some legacy applications, the EJBs need be to be accessible to PL/SQL based applications. I have tried to deploy a Java Stored Procedure