Geo Images Data for testing GeoRasterLoader

Oracle 10g GeoRaster provides a sample application to
load and view geo images. I tested it with a geo image dataset consisting of a .jpg and a .jgw file; the GeoRaster Loader reported a warning and the GeoRaster Viewer could not display the image.
I wonder: is this a fault of the geo image dataset? If so, where could I get some geo image datasets to test GeoRasterLoader or GeoRasterViewer?
Thanks in advance,
Richard lee

Hi,
There seems to be downloadable data at the following sites - note that Oracle has no relationship with the entities that provide this data, and there may be registration or a fee required.
http://www.remotesensing.org/
http://www.gisdatadepot.com/
http://www.state.ma.us/mgis/

Similar Messages

  • Issue with creating bitmap image data for Format8bppIndexed and Format4bppIndexed

    We are using the method below to convert a byte array into a bitmap. We have successfully converted 16-, 24-, and 32-bit pixel formats, but we are facing an issue while converting the 4- and 8-bit
    formats: the image renders blurred and the image start and end positions are shifted.
    /// <summary>
    /// Converting the raw data into a bitmap.
    /// </summary>
    /// <param name="buffer">Byte array of the image raw data</param>
    /// <param name="nWidth">Image width</param>
    /// <param name="nHeight">Image height</param>
    /// <param name="nBitCount">Image pixel format (bits per pixel)</param>
    /// <returns></returns>
    internal Bitmap ConvertRawDataToBitMap(byte[] buffer, int nWidth, int nHeight, int nBitCount, PDIB pDIB = null)
    {
        Size imageSize = new Size(nWidth, nHeight);
        PixelFormat imagePixelFormat = GetPixelFormat(nBitCount);
        Bitmap bitmap = new Bitmap(imageSize.Width, imageSize.Height, imagePixelFormat);
        Rectangle wholeBitmap = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
        BitmapData bitmapData = bitmap.LockBits(wholeBitmap, ImageLockMode.WriteOnly, imagePixelFormat);
        //Marshal.Copy(buffer, 0, bitmapData.Scan0, buffer.Length);
        Marshal.Copy(buffer, 0, bitmapData.Scan0, bitmapData.Stride * bitmap.Height);
        bitmap.UnlockBits(bitmapData);
        bitmap.RotateFlip(RotateFlipType.Rotate180FlipX);
        return bitmap;
    }

    /// <summary>
    /// Returns the pixel format for a given bit count.
    /// </summary>
    /// <param name="nPixelBitCount">Pixel bit count, e.g. 4, 8, 16, 24, etc.</param>
    /// <returns></returns>
    private PixelFormat GetPixelFormat(int nPixelBitCount)
    {
        PixelFormat pixelFormat = PixelFormat.Undefined;
        switch (nPixelBitCount)
        {
            case 4:
                pixelFormat = PixelFormat.Format4bppIndexed;
                break;
            case 8:
                pixelFormat = PixelFormat.Format8bppIndexed;
                break;
            case 16:
                pixelFormat = PixelFormat.Format16bppRgb555;
                break;
            case 24:
                pixelFormat = PixelFormat.Format24bppRgb;
                break;
            case 32:
                pixelFormat = PixelFormat.Format32bppRgb;
                break;
            case 48:
                pixelFormat = PixelFormat.Format48bppRgb;
                break;
            case 64:
                pixelFormat = PixelFormat.Format64bppArgb;
                break;
            default:
                pixelFormat = PixelFormat.Undefined;
                break;
        }
        return pixelFormat;
    }
    Below is the converted image for the 8 bpp pixel format.
    Below is the actual image.
    Please help me to find the solution.
    Thanks in Advance,
     Madhava Reddy and Madhu

    Hi,
    We got the errors shown in the inserted screenshots when using the code below:
    MemoryStream msEVPict1 = new MemoryStream(buffer);
    EVPict.pDIB.ImageBitmap = new Bitmap(msEVPict1);
    MemoryStream ms1 = new MemoryStream(buffer);
    System.Drawing.Image img = Image.FromStream(ms1);
    Thanks,
    Madhu & Madhav

  • Encrypting data for testing - but keeping it legible and the right length

    Hello,
    I currently work with a client where one of our final stages of testing is done against test databases (9.2), maintained by the client, but holding real data, taken as a snapshot. This gives us volumes of data similar to what will be encountered in production.
    The client now has a (valid) concern that our people testing the app therefore get to see 'real', if slightly out-of-date, data, particularly names and addresses, and have asked us if there's some way of hiding this from the testers, who have access to the test db both through a front-end mainly built on Oracle Forms, but also e.g. querying directly with SQL Plus.
    I can't just give all the parties stored in the system the same name or address, so what I think I'd like to do is to replace the sensitive data with encrypted versions, possibly using the DBMS_OBFUSCATION_TOOLKIT. A problem with that, though, is that the encrypted data could contain any characters and may cause problems displaying in the front-end, certainly where testers have to e.g. type in a name to search on.
    I could use rawtohex to convert the data into an easily read / typed format, but then the results could be too long for the existing columns and the fields in the front-ends.
    Has anyone encountered a similar problem and come up with a solution?
    Thanks in advance,
    James

    There are a variety of tools out there sold by companies to automate this sort of thing.
    One relatively common, and relatively simple, approach would be to run a process that mixes and matches data from different records. That is, you take the first name from record 1, the last name from record 2, address from record 3... Of course, you'd have to figure out how sophisticated a jumbling algorithm you need depending on how sensitive the data and how performant an algorithm you need depending on data volumes.
    Relatively efficient, but relatively simple to reverse, would be to use something like this to populate the tables
    insert into emp( ename, sal, comm )
    select ename,
          lag(sal) over( order by empno ),
          lag(comm,2) over (order by empno )
      from emp@prod_db_link
    Justin
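    If the data must also stay legible and keep its original length, another option (not mentioned in the reply; the table and column names below are made up for illustration) is a simple character-substitution mask with TRANSLATE. It is easy to reverse and only suitable for casual de-identification, not real encryption:
    -- Hypothetical table/column names; the scrambled alphabet acts as the "key".
    -- Length and legibility are preserved; digits and punctuation pass through unchanged.
    UPDATE parties
       SET party_name = TRANSLATE(
             party_name,
             'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz',
             'QWERTYUIOPASDFGHJKLZXCVBNMqwertyuiopasdfghjklzxcvbnm');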

  • Tables' data scrambled (randomized) for testing purposes

    Dear Oracle People,
    I need to scramble (randomize) the data for testing purposes.
    How can I do that within Oracle?
    Source:
    CODE          FIRST_NAME     LAST_NAME
    1          FN_1               LN_1
    2          FN_2               LN_2
    3          FN_3               LN_3
    Target after scrambling:
    CODE          FIRST_NAME     LAST_NAME
    1          FN_2               LN_3
    2          FN_3               LN_1
    3          FN_1               LN_2

    Source table: t1
    Target table: t2 (create table t2 as select * from t1)
    Something like this would do:
    DECLARE
       rc NUMBER;
    BEGIN
       SELECT COUNT (*) INTO rc FROM t1;
       FOR i IN (SELECT last_name FROM t1)
       LOOP
          UPDATE t2
             SET last_name = i.last_name
           WHERE code = ROUND (DBMS_RANDOM.VALUE (1, rc));
       END LOOP;
    END;
    /
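    A set-based alternative (a sketch only, not part of the original reply) builds random permutations with ROW_NUMBER and applies them with MERGE, so that every code receives exactly one shuffled first and last name. It assumes t1 and t2 as defined above:
    -- Shuffle first and last names independently across all codes in t2.
    MERGE INTO t2 tgt
    USING (
      SELECT c.code, f.first_name, l.last_name
      FROM   (SELECT code, ROW_NUMBER() OVER (ORDER BY code) rn FROM t1) c
      JOIN   (SELECT first_name, ROW_NUMBER() OVER (ORDER BY DBMS_RANDOM.value) rn FROM t1) f
        ON   c.rn = f.rn
      JOIN   (SELECT last_name, ROW_NUMBER() OVER (ORDER BY DBMS_RANDOM.value) rn FROM t1) l
        ON   c.rn = l.rn
    ) src
    ON (tgt.code = src.code)
    WHEN MATCHED THEN UPDATE
      SET tgt.first_name = src.first_name,
          tgt.last_name  = src.last_name;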

  • Can we use GGate to multiply data for performance testing?

    Hi,
    My team has a requirement where we would like to multiply data 2x, 4x, 8x so that we have a good amount of data for testing in the DB. We have master data created, and using that master data as a reference we want to do the multiplication.

    Say I have 10 records in the DB.
    Can I increase the data to 20, then to 40, and so on by taking these 10 records as the master and changing their IDs and other details that need to be unique?
    And it should stay in the same database (DB1 -- 2x --> DB1).
    Edited by: Lother on 23-Jan-2013 23:36
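    Whether or not GoldenGate handles the replication, the multiplication step itself can be a plain INSERT ... SELECT that offsets the unique keys. A minimal sketch, assuming a hypothetical table MASTER_DATA whose only uniqueness requirement is a numeric ID:
    -- Hypothetical table; each run doubles the row count (10 -> 20 -> 40 -> ...).
    INSERT INTO master_data (id, col1, col2)
    SELECT id + (SELECT MAX(id) FROM master_data),  -- shift keys past the current maximum
           col1,
           col2
    FROM   master_data;
    COMMIT;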

  • How to create or generate sample / test data (mass data for tables) ?

    Hello,
    I'm playing around a little with some SQL functions, but at the moment I have only a small number of rows in my sample table and I would like to have "big, filled tables". :-)
    Is there an easy way to generate mass data for tables, e.g. for testing the performance of SQL statements when a table is full of data?
    For example:
    How can I generate, let's say, 50,000 or 100,000 rows in a table with two columns? Is there a ready-to-use command to generate this mass of data?
    How do you create random data for testing?
    Thanks a lot in advance for your help!
    Best regards
    FireFighter
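    For the simple two-column case asked about above, a row generator such as CONNECT BY LEVEL combined with DBMS_RANDOM is usually enough. A minimal sketch, using a hypothetical table BIG_T:
    -- Hypothetical table for illustration
    CREATE TABLE big_t (id NUMBER, txt VARCHAR2(30));
    -- Generate 100,000 rows of random data
    INSERT INTO big_t (id, txt)
    SELECT LEVEL,
           DBMS_RANDOM.string('A', 30)   -- 30 random alphabetic characters
    FROM   dual
    CONNECT BY LEVEL <= 100000;
    COMMIT;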

    First, thanks for the quick and great answer! It looks like exactly what I'm looking for. How could I forget to look at Tom's site for such a script? ;-)
    But.....
    ...unfortunately, it doesn't seem to work. :-( And since I'm not very experienced in PL/SQL yet (looking forward to a course at the end of the year...), I don't know what the error is. Is it not meant to be used with 10g?
    So, here is what I do:
    1. Log in to SQL*Plus and create the procedure with "@gen_data.sql".
    2. Then I try to execute the procedure using "exec gen_data('mytesttable',500);".
    Then I get the following error output:
    SQL*plus> exec gen_data( 'mytable', 50 );
    BEGIN gen_data( 'mytable', 50 ); END;
    ERROR at line 1:
    ORA-00936: missing expression
    ORA-06512: at "SYS.GEN_DATA", line 34
    ORA-06512: at line 1
    And here is the code that I have used:
    01 create or replace procedure gen_data( p_tname in varchar2, p_records in number )
    02 authid current_user
    03 as
    04 l_insert long;
    05 l_rows number default 0;
    06 begin
    07
    08 dbms_application_info.set_client_info( 'gen_data ' || p_tname );
    09 l_insert := 'insert /*+ append */ into ' || p_tname ||
    10 ' select ';
    11
    12 for x in ( select data_type, data_length,
    13 nvl(rpad( '9',data_precision,'9')/power(10,data_scale),9999999999) maxval
    14 from user_tab_columns
    15 where table_name = upper(p_tname)
    16 order by column_id )
    17 loop
    18 if ( x.data_type in ('NUMBER', 'FLOAT' ))
    19 then
    20 l_insert := l_insert || 'dbms_random.value(1,' || x.maxval || '),';
    21 elsif ( x.data_type = 'DATE' )
    22 then
    23 l_insert := l_insert ||
    24 'sysdate+dbms_random.value+dbms_random.value(1,1000),';
    25 else
    26 l_insert := l_insert || 'dbms_random.string(''A'',' ||
    27 x.data_length || '),';
    28 end if;
    29 end loop;
    30 l_insert := rtrim(l_insert,',') ||
    31 ' from all_objects where rownum <= :n';
    32
    33 loop
    34 execute immediate l_insert using p_records - l_rows;
    35 l_rows := l_rows + sql%rowcount;
    36 commit;
    37 dbms_application_info.set_module
    38 ( l_rows || ' rows of ' || p_records, '' );
    39 exit when ( l_rows >= p_records );
    40 end loop;
    41 end;
    42 /
    Does anybody see what error I have in here?
    Thanks again for your help in advance!
    Rgds
    FF
    Message was edited by:
    FireFighter

  • Creating a folder for testing

    Hi all,
    This is a suggestion rather than a question.
    I sometimes need a folder with a nice small well defined set of data for testing.
    Rather than having to go to the DA and ask them to add a table to the Oracle database and then add it to the Business Area, I found a simpler solution:
    In Discoverer Administrator Create a Custom Folder.
    Add SQL something like:
    SELECT 'A' AS Type, 1 AS Days, 10 AS DailyCost FROM DUAL
    UNION SELECT 'A',2,20 FROM DUAL
    UNION SELECT 'B',3,50 FROM DUAL
    UNION SELECT 'B',4,100 FROM DUAL
    This will provide a table with three fields (Type, Days & DailyCost) and four records.
    Save this and give it a sensible name such as TestTable and it's then available
    for running tests.
    It's not too complex to create a series of Excel formulae which will take a table of values and create the SQL to be used to create a custom folder.
    This is a fairly quick (if hackish) way of getting a table of values into the business area until such time as it can be put into the database by the DA.
    In Excel
    For a table with field names in cells A1..C1 and records in A2..C5 use the following formulae in Cells E2..E5 and then copy these four cells to get the SQL!
    ="SELECT "&IF(ISTEXT(A2),"'"&A2&"'",A2&"")&" AS "&A1&","&IF(ISTEXT(B2),"'"&B2&"'",B2&"")&" AS "&B1&","&IF(ISTEXT(C2),"'"&C2&"'",C2&"")&" AS "&C1& " FROM DUAL"
    ="UNION SELECT "&IF(ISTEXT(A3),"'"&A3&"'",A3&"")&","&IF(ISTEXT(B3),"'"&B3&"'",B3&"")&","&IF(ISTEXT(C3),"'"&C3&"'",C3&"")& " FROM DUAL"
    ="UNION SELECT "&IF(ISTEXT(A4),"'"&A4&"'",A4&"")&","&IF(ISTEXT(B4),"'"&B4&"'",B4&"")&","&IF(ISTEXT(C4),"'"&C4&"'",C4&"")& " FROM DUAL"
    ="UNION SELECT "&IF(ISTEXT(A5),"'"&A5&"'",A5&"")&","&IF(ISTEXT(B5),"'"&B5&"'",B5&"")&","&IF(ISTEXT(C5),"'"&C5&"'",C5&"")& " FROM DUAL"
    Yours nerdily
    Suhada

    Hi Suhada
    Like Rod, I'm a fellow nerd and am always on the lookout for cool ways to expand and work on data inside Discoverer.
    With your permission, I'd like to take this idea and make a posting on my blog (http://learndiscoverer.blogspot.com/). I'll give you credit for the idea, but thought it might be nice if I could show it working with screenshots of Excel and inside both Discoverer Admin and Plus.
    If you don't object, please send me ([email protected]) the Excel spreadsheet that you used for your test and I'll see if I can't get something up within the next week or so.
    Best wishes
    Michael

  • Dynamics CRM 2013 database for testing

    Hi,
    I'm a beginner in Dynamics CRM. I am now using Dynamics 2013 and I need a database (containing information) for testing. Thanks.
    Mark as answer or vote as helpful if you find it useful | Ammar Zaied [MCP]

    Hi,
    If you go to Custom/Data Administrator/Example Data, you can install data for testing.
    Regards, Atilin | http://www.dexrm.com

  • How to generate test data for all the tables in oracle

    I am planning to use PL/SQL to generate test data in all the tables in a schema. The schema name is given as an input parameter, along with the minimum records for master tables and the minimum records for child tables. The data should be consistent in the columns that are used for constraints, i.e. using the same column values.
    I am planning to implement something like
    execute sp_schema_data_gen (schemaname, minrecinmstrtbl, minrecsforchildtable);
    schemaname = owner,
    minrecinmstrtbl = minimum records to insert into each parent table,
    minrecsforchildtable = minimum records to insert into each child table of each master table;
    all_tables where owner = schemaname;
    all_tab_columns and all_constraints - where owner = schemaname;
    using the dbms_random package.
    Does anyone have a better idea for doing this? Is this functionality already available in the Oracle DB?

    Ah, damorgan, data, test data, metadata and table-driven processes. Love the stuff!
    There are two approaches you can take with this. I'll mention both and then ask which
    one you think you would find most useful for your requirements.
    One approach I would call the generic bottom-up approach which is the one I think you
    are referring to.
    This system is a generic test data generator. It isn't designed to generate data for any
    particular existing table or application but is the general case solution.
    Building on damorgan's advice, define the basic hierarchy: table collection, tables, data; then start at the data level.
    1. Identify/document the data types that you need to support. Start small (NUMBER, VARCHAR2, DATE) and add as you go along
    2. For each data type identify the functionality and attributes that you need. For instance for VARCHAR2
    a. min length - the minimum length to generate
    b. max length - the maximum length
    c. prefix - a prefix for the generated data; e.g. for an address field you might want an 'add1' prefix
    d. suffix - a suffix for the generated data; see prefix
    e. whether to generate NULLs
    3. For NUMBER you will probably want at least precision and scale but might want minimum and maximum values or even min/max precision,
    min/max scale.
    4. store the attribute combinations in Oracle tables
    5. build functionality for each data type that can create the range and type of data that you need. These functions should take parameters that can be used to control the attributes and the amount of data generated.
    6. At the table level you will need business rules that control how the different columns of the table relate to each other. For example, for ADDRESS information your business rule might be that ADDRESS1, CITY, STATE, ZIP are required and ADDRESS2 is optional.
    7. Add table-level processes, driven by the saved metadata, that can generate data at the record level by leveraging the data type functionality you have built previously.
    8. Then add the metadata, business rules and functionality to control the TABLE-TO-TABLE relationships; that is, the data model. You need the same DEPTNO values in the SCOTT.EMP table that exist in the SCOTT.DEPT table.
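    As a rough illustration of steps 1 through 5 (a sketch only; the table and function names below are invented, not part of any existing design), the per-type attributes can live in a small metadata table and be consumed by a generator function:
    -- Hypothetical metadata table holding VARCHAR2 generation attributes (steps 1-4)
    CREATE TABLE gen_varchar2_attrs (
      attr_set_id NUMBER PRIMARY KEY,
      min_length  NUMBER,
      max_length  NUMBER,
      prefix      VARCHAR2(30),
      suffix      VARCHAR2(30),
      allow_nulls CHAR(1)            -- 'Y' or 'N'
    );
    -- Hypothetical generator function (step 5): returns one random value per call
    CREATE OR REPLACE FUNCTION gen_varchar2 (p_attr_set_id IN NUMBER)
      RETURN VARCHAR2
    IS
      l_attrs gen_varchar2_attrs%ROWTYPE;
      l_len   PLS_INTEGER;
    BEGIN
      SELECT * INTO l_attrs FROM gen_varchar2_attrs WHERE attr_set_id = p_attr_set_id;
      IF l_attrs.allow_nulls = 'Y' AND DBMS_RANDOM.value < 0.1 THEN
        RETURN NULL;                 -- roughly 10% NULLs when NULLs are allowed
      END IF;
      l_len := TRUNC(DBMS_RANDOM.value(l_attrs.min_length, l_attrs.max_length + 1));
      RETURN l_attrs.prefix || DBMS_RANDOM.string('A', l_len) || l_attrs.suffix;
    END gen_varchar2;
    /
    NUMBER and DATE attributes and generators would follow the same pattern, and the table-level processes (steps 6-8) string these calls together per column.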
    The second approach I have used more often. I would call it the top-down approach, and I use
    it when test data is needed for an existing system. The main use case here is to avoid
    having to copy production data to QA, TEST or DEV environments.
    QA people want to test with data that they are familiar with: names, companies, code values.
    I've found they aren't often fond of random character strings for names of things.
    I use the second approach for mature systems where there is already plenty of data to choose from.
    It involves selecting subsets of data from each of the existing tables and saving that data in a
    set of test tables. This data can then be used for regression testing and for automated unit testing of
    existing functionality and functionality that is being developed.
    QA can use data they are already familiar with and can test the application (GUI?) interface on that
    data to see if they get the expected changes.
    For each table to be tested (e.g. DEPT) I create two test system tables: a BEFORE table and an EXPECTED table.
    1. DEPT_TEST_BEFORE
         This table has all DEPT table columns and a TESTCASE column.
         It holds DEPT-image rows for each test case that show the row as it should look BEFORE the
         test for that test case is performed.
         CREATE TABLE DEPT_TEST_BEFORE (
           TESTCASE NUMBER,
           DEPTNO   NUMBER(2),
           DNAME    VARCHAR2(14 BYTE),
           LOC      VARCHAR2(13 BYTE)
         );
    2. DEPT_TEST_EXPECTED
         This table also has all DEPT table columns and a TESTCASE column.
         It holds DEPT-image rows for each test case that show the row as it should look AFTER the
         test for that test case is performed.
    Each of these tables is a mirror image of the actual application table with one new column
    added that contains a value representing the test case number.
    To create test case #3, identify or create the DEPT records you want to use for it.
    Insert these records into DEPT_TEST_BEFORE:
         INSERT INTO DEPT_TEST_BEFORE
         SELECT 3, D.* FROM DEPT D WHERE DEPTNO = 20;
    Insert records for test case #3 into DEPT_TEST_EXPECTED that show the rows as they should
    look after test #3 is run. For example, if test #3 creates one new record, add all the
    records from the BEFORE data set and add a new one for the new record.
    When you want to run test case #3, the process is basically (ignoring for this illustration that
    there is a foreign key between DEPT and EMP):
    1. Delete the records from SCOTT.DEPT that correspond to the test case #3 DEPT records.
              DELETE FROM DEPT
              WHERE DEPTNO IN (SELECT DEPTNO FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3);
    2. Insert the test data set records for SCOTT.DEPT for test case #3.
              INSERT INTO DEPT
              SELECT DEPTNO, DNAME, LOC FROM DEPT_TEST_BEFORE WHERE TESTCASE = 3;
    3. Perform the test.
    4. Compare the actual results with the expected results.
         This is done by a function that compares the records in DEPT with the records
         in DEPT_TEST_EXPECTED for test #3 (see the sketch after this list).
         I usually store these results in yet another table or just report them out.
    5. Report out the differences.
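    The comparison in step 4 can be as simple as a symmetric difference between the live table and the expected image; a sketch only, not the poster's actual compare function:
    -- Any row returned here is a mismatch to report in step 5.
    SELECT 'MISSING FROM DEPT' AS problem, deptno, dname, loc
    FROM  (SELECT deptno, dname, loc FROM dept_test_expected WHERE testcase = 3
           MINUS
           SELECT deptno, dname, loc FROM dept)
    UNION ALL
    SELECT 'UNEXPECTED IN DEPT', deptno, dname, loc
    FROM  (SELECT deptno, dname, loc FROM dept
           MINUS
           SELECT deptno, dname, loc FROM dept_test_expected WHERE testcase = 3);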
    This second approach uses data the users (QA) are already familiar with, is scalable, and
    makes it easy to add new data that meets business requirements.
    It is also easy to automatically generate the necessary tables and test setup/breakdown
    using a table-driven metadata approach. Adding a new test table is as easy as calling
    a stored procedure; the procedure can generate the DDL or create the actual tables needed
    for the BEFORE and AFTER snapshots.
    The main disadvantage is that existing data will almost never cover the corner cases.
    But you can add data for these. By corner cases I mean data that defines the limits
    for a data type: a VARCHAR2(30) name field should have at least one test record that
    has a name that is 30 characters long.
    Which of these approaches makes the most sense for you?

  • Insufficient data for image

    I just received an update to version 10.1.4. My previous version was 10.1.23. After the upgrade, I started to receive the error message "insufficient data for image". This is the first time this error has appeared, so it has to be related to the new version. I uninstalled this version and installed version 9, and the problem went away. Is anyone else having this issue, or is there a fix for it?
    Thanks...

      At last I understand.
    Sorry I was so slow.
    By the time you see the "Insufficient data for an image" message, it is too late.
    Exit Acrobat without saving anything. If you save the PDF it is worthless; get the original again.
    Note that when I tried this procedure using just the "Optimize Scanned PDF" tool, instead of saving and working with an optimized PDF, the procedure failed with the "Insufficient data for an image" message. As a result, though the following procedure seems to include some redundant steps, they appear to be necessary in this context.
    From here on I'll assume you have added "Recognize text in this file" and "Manage Embedded Index" to your Quick Tools.
    Open the downloaded PDF.
    File > Save As > Optimized PDF
         Uncheck "Optimize images only if there is a reduction in size"
         OK
         Save [at this point you may want to save it under another name]
              (for my PDF on my machine this took 5 minutes)
              (I got the following message:
                  "Conversion warning: The PDF document contained image masks that were not downsampled"
                   OK)
    Select the "Recognize text in this file" quick tool.
         On the Recognize Text panel:
              Mark "All pages"
              Primary OCR Language: English (US)
              PDF Output Style: Searchable Image
              Downsample To: 600 dpi
              OK
              (for my PDF on my machine this took 15 minutes)
    Select the "Manage embedded index" quick tool.
         On the Manage Embedded Index panel:
              Select "Embed Index"
              "The PDF document needs to be saved before an index can be embedded. Do you want to save and continue?"
              Yes
              "Status: Index has been embedded"
              OK
    Edit > Find > "Hog" [this Find test works], so this workaround did the job.
    Thanks again for your help.

  • Insufficient Data for an Image Acrobat Pro 11.0.6 - Windows 8.1

    We have recently upgraded several users in our office to Windows 8.1. They are running Acrobat Pro 11.0.6, which they had previously been running on Windows 7 without issue.
    Now, in Windows 8, they will have a PDF file open and pages in it start to go blank, followed by the "insufficient data for an image" error. It seems that this may be related to having a downloaded PDF file open in another window, or to making comments in a PDF file.
    Has anyone else run into this? Any suggestions on what to do? We've uninstalled/reinstalled, run a cleaner tool, etc., to no avail. The other suggestion Adobe has had so far is to change the zoom on the file; that did not resolve it either.

    We are having the same issue. I've tried the newest version of Acrobat (11.0.6) on Windows 7 and Windows 8.1. I've also tried earlier versions of Acrobat (9 & 10), fully updated. So far I have found two PDFs that consistently cause the "Insufficient data for an image." error. The PDF I've been using to test is located here (http://japoncaogreniyorum.files.wordpress.com/2010/08/lets_learn_japanese_basic_ii_1_of_2.pdf). If I open that PDF in Acrobat 11.0.6 I get the error right away. If I open that same PDF in Chrome it opens with no errors and looks as it should: no missing pages, half pages, or blank pages. The second PDF I've been testing with is confidential, but it has yielded the same results. Regardless of the version of Acrobat I open the PDFs in, I get the error. In Chrome they display and print with no errors.
    Again, this is tested with Acrobat 11.0.6 on Windows 7 & 8.1, the version cited by Adobe to fix this very problem.
    Any help is much appreciated

  • How to resolve insufficient data for an image

    Hi
    I am using an Oracle 9i database.
    I have written a package that uses the 'utl_smtp' package to send an email with an attachment.
    When I attach an Oracle report (in .pdf format; this report has an image inserted into it) and send it to an email address, the attachment is sent properly. But when I open the attached file it gives me
    "insufficient data for an image" instead of showing the picture. After the picture, all the remaining text of the report appears properly.
    The image is a 24-bit .bmp file.
    To test, I created another .bmp file of a smaller size, and in that case it works fine.
    Could you please help me resolve the problem?
    Regards

    Hi
    Thanks
    Yes, I can open the .bmp file.
    My .pdf file is an Oracle report; I am generating the Oracle report as a .pdf file.
    When I open this .pdf file, it's perfectly fine; there is no problem with the file.
    But when I send this file as an attachment using utl_smtp, and after receiving the email I open the attachment, I get the "insufficient data for an image" error, and the image is not shown in the .pdf.
    regards
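    One possible cause of exactly this symptom in hand-rolled UTL_SMTP attachment code (an assumption here; the poster's package is not shown) is base64-encoding the BLOB in chunks whose size is not a multiple of 3 bytes: padding then appears in the middle of the attachment, corrupting larger files while a small file survives in a single chunk. A sketch of a chunking loop that avoids this:
    -- Sketch only: streams an already-built BLOB (e.g. the report PDF) into an open
    -- UTL_SMTP connection. The chunk size is a multiple of 3 bytes so that base64
    -- padding ('=') can only ever appear at the very end of the attachment.
    CREATE OR REPLACE PROCEDURE send_blob_chunks (
      p_conn IN OUT NOCOPY UTL_SMTP.connection,   -- connection already past the MIME headers
      p_blob IN            BLOB)
    IS
      c_step   CONSTANT PLS_INTEGER := 57 * 100;  -- 5700 bytes, divisible by 3
      l_offset PLS_INTEGER := 1;
      l_length PLS_INTEGER := DBMS_LOB.getlength(p_blob);
    BEGIN
      WHILE l_offset <= l_length LOOP
        UTL_SMTP.write_raw_data(
          p_conn,
          UTL_ENCODE.base64_encode(DBMS_LOB.substr(p_blob, c_step, l_offset)));
        l_offset := l_offset + c_step;
      END LOOP;
    END send_blob_chunks;
    /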

  • Acrobat v9.5: Insufficient data for an image

    Hi,
    There are several cases where I have scanned a document on a Konica Minolta c203 or c451 and have been unable to open the file in Acrobat v9.5. The document fails to open with the "Insufficient data for an image" error. I'm using Windows XP SP3.
    As a test, I used an original install of Acrobat 9.0. With that installation, the document opened without any problems. When I let Acrobat update to 9.2, I was no longer able to open the same file; I received the "Insufficient data for an image" error. I performed the next updates up to v9.5 and still have the same issue.
    I've read the other entries about this error, but there does not seem to be agreement on how to fix it other than possibly downgrading to v8.x or upgrading to v10.x. Is this the only option? Any other ideas?
    Thanks,
    -J

    You scanned to what kind of file? Opening a file does not say anything about the format of the scanned image.

  • Reader 11.0.03 - "Insufficient data for an image"

    Hi! Since Adobe Reader 10.1.5 we get the message "Insufficient data for an image" for several PDF files.
    Can anybody from Adobe analyse these PDF files?
    I can send a test file.
    Thanks
    Thomas

    I have had a closer look at the document, and it appears as if the image on page 2 is somewhat corrupted.
    Then I have done the following
    opened the image with Photoshop; PS seems to have corrected the image.
    saved the image as a PSD file
    opened the damaged image from the PDF with Illustrator
    replaced the image with the corrected image from Photoshop
    saved the image back into the original PDF; the PDF can now be successfully opened
    I have sent you the corrected PDF via Acrobat.com Workspaces to your email address.
    Now of course I am aware that this did not solve the problem for you.  But it did, to me at least, prove that the document contained a damaged image.
    I can see that the document was created with Corel Draw X5.  Now the big question is: is it a problem of Corel Draw, or Adobe Reader / Acrobat 11.0.3?
    Have you tried to contact Corel about the issue?

  • Error in SQL Query The text, ntext, and image data types cannot be compared or sorted, except when using IS NULL or LIKE operator. for the query

    Hi experts,
    While running a SQL query I am getting the error
    "The text, ntext, and image data types cannot be compared or sorted, except when using IS NULL or LIKE operator." for the query:
    select T1.Dscription, T1.docEntry, T1.Quantity, T1.Price,
    T2.LineText
    from OQUT T0 INNER JOIN QUT1 T1 ON T0.DocEntry = T1.DocEntry INNER JOIN
    QUT10 T2 ON T1.DocEntry = T2.DocEntry where T1.DocEntry = '590'
    group by T1.Dscription, T1.docEntry, T1.Quantity, T1.Price,
    T2.LineText
    How do I resolve this issue?

    Dear Meghanath,
    Please use the following query; I hope it serves your purpose:
    select T1.Dscription, T1.docEntry, T1.Quantity, T1.Price,
    CAST(T2.LineText as nvarchar(MAX)) [LineText]
    from OQUT T0 INNER JOIN QUT1 T1 ON T0.DocEntry = T1.DocEntry LEFT OUTER JOIN
    QUT10 T2 ON T1.DocEntry = T2.DocEntry --where T1.DocEntry='590'
    group by T1.Dscription, T1.docEntry, T1.Quantity, T1.Price,
    CAST(T2.LineText as nvarchar(MAX))
    Regards,
    Amit
