Most efficient way to handle Image sizes with respect to Mobile Screen Size

Hi all,
I am trying to find the best possible way to manage images with respect to the size of the mobile screen.
If I have an image that fits best on a 176 x 144 mobile screen, how can I make it fit best on a 128 x 128 screen?
rizzz86

Hey Rizzz86,
You could scale down a higher-resolution image to fit smaller screens; however, you will find that resizing by factors other than 2, 4, or 8 is harder to achieve with decent quality and much slower on lower-spec devices.
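As a rough illustration, halving by a factor of 2 can be done with simple pixel sampling. This is only a sketch (it assumes MIDP 2.0's Image.getRGB / createRGBImage and does no filtering; real code would probably want to average neighbouring pixels for better quality):
import javax.microedition.lcdui.Image;

// Sketch: halve an image's width and height by taking every other pixel.
public class ImageScaler {
    public static Image halve(Image src) {
        int w = src.getWidth();
        int h = src.getHeight();
        int[] pixels = new int[w * h];
        src.getRGB(pixels, 0, w, 0, 0, w, h);   // read the ARGB pixel data
        int nw = w / 2;
        int nh = h / 2;
        int[] scaled = new int[nw * nh];
        for (int y = 0; y < nh; y++) {
            for (int x = 0; x < nw; x++) {
                scaled[y * nw + x] = pixels[(y * 2) * w + (x * 2)];
            }
        }
        return Image.createRGBImage(scaled, nw, nh, true);
    }
}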
You could also display a 128 x 128 image centered on the 176 x 144 screen with black borders around it.
It will not look that bad (i.e. borders of 24 pixels left and right and 8 pixels top and bottom) and will cost you nothing in terms of space or processing.
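A minimal sketch of that centered drawing (assuming a MIDP Canvas and an image that has already been loaded):
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.Image;

// Sketch: paint a fixed-size image centered on whatever screen we get,
// filling the unused border area with black.
public class CenteredImageCanvas extends Canvas {
    private final Image image;   // e.g. a 128 x 128 image

    public CenteredImageCanvas(Image image) {
        this.image = image;
    }

    protected void paint(Graphics g) {
        g.setColor(0x000000);
        g.fillRect(0, 0, getWidth(), getHeight());   // black background
        // On a 176 x 144 screen a 128 x 128 image leaves 24-pixel borders
        // left and right and 8-pixel borders top and bottom.
        g.drawImage(image, getWidth() / 2, getHeight() / 2,
                    Graphics.HCENTER | Graphics.VCENTER);
    }
}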
In the end, having an image in 3 different sizes will not increase the MIDlet size by much, unless you have tens of such images (for example, custom UI elements).
The best solution is a bit of both, I think. Personally, I scale down by factors that are powers of 2 and display with black borders.
Daniel

Similar Messages

  • Most efficient way to place images

    I am composing a catalog with a lot of images along with the text. The source images are often not perfectly square (straight vertical, portrait). I also want to add a thin line frame around each one in InDesign to spruce up the look. I'm spending a lot of time in Photoshop straightening images, because rotating in InDesign to get the image straight results in a non-straight frame.
    Should I create a small library of frames that I place, then place the non-straight images in them (and how do I do that) and rotate them in InDesign? Etc.?
    What would be the most efficient way to do this?
    Thanks

    To tag onto what Peter said, when you click on the image with the Direct Selection tool you can also use the up and down arrows in the rotation dialog (where you enter the angle, at the top) to easily change the rotation.
    Also, when you place images in InDesign you can select a number of images at once and then click repeatedly in the document (or in image frames) to place each of the images you selected. To clarify: you can have a whole bunch of empty image frames on the page, then go to File > Place, select all your images, and click each empty frame in turn to place an image inside it.

  • Large campaigns - Most efficient way to handle - Not using EMOD

    We're using CRMOD but not EMOD for our marketing campaigns. We are not using CRM for B2B but rather for B2C.
    We have direct mail campaigns that we send to 200,000 of our contacts, and eNewsletters sent to about 800,000 contacts (using Interspire). We have not been doing this through CRM before.
    I would like to know the best approach to:
    1) Pull out these records from CRM (identify the contacts according to marketing targets) - the limit in Analytics only lets me extract that information if I enter criteria to segment my contacts into smaller chunks, e.g. by state/province. The Segmentation Wizard can be used for chunks of fewer than 50,000. Are any other methods available?
    I used the Wizard for a test and got a segment with a few records, for which I updated CRM with Campaign Recipient records with status 'Sent'.
    2) Sometimes when we hire a company to do our direct mail campaign, our addresses are verified for change of address. The company we hire then sends back our file with updated addresses. How can we import just the change of address for, let's say, 35,000 records? In Admin / Data Import, importing addresses does not seem to be available.
    3) Import responses into CRM. I did some testing by entering campaign recipients and setting the status to 'Message Opened' and the Delivery Status to 'Opened' (both using a Wizard segment update and the user interface), but when I try to pull the numbers in Analytics (Campaign Response History) I see the following metric being populated:
    - # Recipients
    but not:
    - # of responders
    - # of Responses
    - # of open responses
    - Avg days to respond
    My questions here are:
    - How on earth do I prep my data so that these fields populate? Are they strictly reserved for EMOD?
    - What's the best approach to load 200,000 responses back into CRM?
    - Is such large volume for campaign recipients and responses going to impact our performance? If so, what approach would be recommended to improve that?
    4) Are the numbers of opt-ins, opt-outs, global opt-ins and global opt-outs only available for EMOD, or can they be used if another marketing email system is used? If we can use them, how can we best load them into CRM for large numbers?
    5) Documentation on metrics is hard to find... Can anyone tell me what fields are involved in the Campaign Response metrics and the criteria that affect them?
    Thanks. I know this is a lot of questions, but we are new at integrating our marketing campaign efforts into CRM, and with large numbers, unless one knows the best approach to handle them, one can greatly impact system performance.
    Nathalie

    Afternoon,
    1) I would create reports with filters that segment the contacts into smaller chunks and extract them in CSV format. Once you have narrowed this down to the contacts you want to use, you can use the bulk data load process.
    2) This information is available for reporting; please make sure you have given access to it within the profile and that the layout type is not read-only.
    3) This information is viewable within reporting (Campaign Response History).
    My questions here are:
    - How on earth do I prep my data so that these fields populate? Are they strictly reserved for EMOD?
    -----Make the system auto-create these values on creation.
    - What's the best approach to load 200,000 responses back into CRM?
    -----Web services
    - Is such large volume for campaign recipients and responses going to impact our performance? If so, what approach would be recommended to improve that?
    -----No, this should have no impact at all.
    4) You would have to import the data from the other mail program into CRM and EMOD; you could do this through data loads back into CRM.
    5) I believe I have read some of this information in the help somewhere; I would suggest looking around in the help for EMOD.

  • Most efficient way to handle Strings?

    I've heard that Strings are immutable, so you should use StringBuffers. For example, let's say you have an array of Strings called testArray...
    String matchMe = "";
    for (int i = 0; i < testArray.length; i++) {
        matchMe = resultOfSomeExpression;
        if (testArray[i].equals(matchMe)) {
            doSomeOperation;
        }
    }
    ...matchMe = resultOfSomeExpression will produce a new String object every time you go through the loop. So I've heard you're supposed to make matchMe a StringBuffer and use StringBuffer.replace() to reset it. That seems really cumbersome to me. I've been using StringBuffers as much as possible, but it requires a lot more code than just using Strings. What do other people do? What's the best procedure?
    Ethan

    The best procedure is what works best for you - but what I suggest is that you use a String object when you don't plan on modifying it, and that you use a StringBuffer object if you're going to modify a sequence of characters or will construct strings dynamically. The rationale behind this is that, behind the scenes, the compiler uses StringBuffer to handle concatenation of strings.
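    To make that concrete, here is a minimal sketch; the candidates array and the return values are just hypothetical stand-ins for the poster's resultOfSomeExpression and doSomeOperation:
    // Sketch: reuse one StringBuffer across loop iterations instead of creating
    // a new String each time. setLength(0) resets the buffer without allocating.
    public class MatchExample {
        static int findMatch(String[] testArray, String[] candidates) {
            StringBuffer matchMe = new StringBuffer();
            for (int i = 0; i < testArray.length; i++) {
                matchMe.setLength(0);              // clear the previous contents
                matchMe.append(candidates[i]);     // stand-in for resultOfSomeExpression
                if (testArray[i].contentEquals(matchMe)) {
                    return i;                      // stand-in for doSomeOperation
                }
            }
            return -1;
        }
    }
    Whether this actually saves anything noticeable depends on how hot the loop is; for a handful of iterations, plain Strings are perfectly fine.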

  • Most efficient way to get row count with a where clause

    Have not found a definitive answer on how to do this.  Is there a better way to get a row count from a large table that needs to satisfy a where clause like so:
    SELECT COUNT(*) FROM BigTable WHERE TypeName = 'ABC'
    I have seen several posts suggesting something like 
    SELECT COUNT(*) FROM BigTable(NOLOCK);
    and 
    SELECT OBJECT_NAME(object_id), row_count FROM sys.dm_db_partition_stats
    WHERE OBJECT_NAME(object_id) = 'BigTable'
    but I need the row count that satisfies my where clause (i.e. WHERE TypeName = 'ABC').

    It needs an index to improve the performance.
    - create an index on the TypeName column
    - create an indexed view to do the count in advance
    - partition on TypeName (although it's unlikely you need to), then get the count from the system tables...
    All three solutions are about indexing.
    Regards
    John Huang, MVP-SQL, MCM-SQL, http://www.sqlnotes.info

  • What is the most efficient way to use my 2006ish iMac as a second screen for my MacBook pro?

    Hello.
    My line of work really requires the use of two screens, and I have a MacBook Pro from 2010 and a white iMac from 2006, I believe. My editing software lives on my laptop, so I'd like to use my iMac as the second display. I can't seem to find any male Mini DVI to male Mini DisplayPort cables online, so I have a feeling they don't exist. Should I just get a Mini DVI (male) to DVI (female) adapter for the iMac and a (male) DVI to (male) Mini DisplayPort cable for the laptop? Is this going to cause latency if I'm running videos? Help please!

    Your iMac (from 2006) does not support Target Display Mode; that functionality only became available in late 2009 with Mini DisplayPort. Even if you were to find a male-to-male cable, the iMac isn't going to accept the video from the MacBook Pro in the way you're hoping (extended desktop mode). I've used ScreenRecycler and the lag is barely noticeable; I think even calling it "lag" is overkill.
    If a second monitor is apparently such an important accessory for your work, why are they not supplying you with one? Even if you were to find a valid connection solution, its response time would still be slower than a direct link from the MacBook Pro.

  • Most efficient way to do multiple crops on many images?

    I have a large number of images shot in the default 4:3 aspect ratio. I need to print almost 200 as 4x6, and an undetermined but certainly large number as 8x10, so I have a lot of cropping to do. What would seasoned Aperture users suggest as the most efficient way to do this? I've thought of two possibilities:
    1. Duplicate every image I need to print in both sizes and crop one for each size print. This is the best option I've thought of, but it would certainly eat a lot of drive space.
    2. Do all the 8x10 crops, revert to original, and do the 4x6 crops. Saves disk space, but leaves me with only the 4x6 crop in Aperture. (Sounds like I want to have my cake and eat it too, I suppose.)
    Anyway, there are a lot of you out there who have logged a lot more Aperture hours than I have. Is there a better workflow I have not considered?
    Thanks,
    Ben

    Hello Ben,
    The beauty of Aperture is that you can have many versions of an image without needing much extra disk space.
    To have three versions of an image cropped to different aspect ratios, don't create duplicate master images but use (from the Aperture main menu) "Photos -> Duplicate Version" or "New Version from Master". Then crop this new version to a different aspect ratio. Aperture will not really render a new image file but just store the cropping rectangle, to be able to create the cropped image when you export or print it.
    So you can have an original version, an 8x10 version, and a 4x6 version in your library without needing much extra space - that is one of the rare occasions when you can have your cake and eat it too.
    Regards
    Léonie

  • [11g] most efficient way to calculate size of xmltype type column

    I need to check the current size of some xmltype column in a BIU trigger.
    I don't think it's good to use
      length(:new.xml_data.GetStringVal());
    or
      dbms_lob.GetLength(:new.xml_data.GetClobVal());
    What's the most efficient way to get the storage size?
    I don't need the string serialized size.
    It could also be the internal storage size (incl. administration data overhead).
    - thanks!
    regards,
    Frank

    > May I ask for what reason you need to know it?
    I need to handle very large XML document output, which currently hits the internal XMLType limitation of 4 GB when aggregating XML document fragments.
    > You'll get a relevant answer if you give us relevant information :
    > - exact db version
    SELECT * FROM PRODUCT_COMPONENT_VERSION;
      #  PRODUCT                                  VERSION     STATUS
      1  NLSRTL                                   11.2.0.3.0  Production
      2  Oracle Database 11g Enterprise Edition   11.2.0.3.0  64bit Production
      3  PL/SQL                                   11.2.0.3.0  Production
      4  TNS for Linux:                           11.2.0.3.0  Production
    > - DDL of your table
    > XML stored as XMLType datatype can use different storage models, depending on the version.
    I don't use a dedicated storage clause.
    But I am already hitting the problem when aggregating into an XMLType variable in PL/SQL
    - BEFORE writing it back to a result table.
    Can I avoid such problems by writing to a table DIRECTLY, without an intermediate XMLType PL/SQL variable
    - depending on the storage clause?
    The reason for asking how to get the size of an XMLType (in a table column and/or in a PL/SQL variable) is that I am considering threshold detection.
    If the threshold is reached: move the XML fragment built so far out to separate CLOB storage, and insert a smaller meta-information reference representing it in the output document.
    Finally, leave it up to the client system to use <xs:include> (or similar) to construct the complete document.
    rgds,
    Frank

  • Just got girlfriend a new iPad2. Her iMac is a PowerPC G5 (Tiger version 10.4.11) with 512 mb RAM. What's the simplest, most efficient way to get her iPad2 up and running and synced to her Mac?

    Just got girlfriend a new iPad2. Her iMac is a PowerPC G5 (Tiger version 10.4.11) with 512 mb RAM. What's the simplest, most efficient way to get her iPad2 up and running and synced to her Mac?

    Most of the Apple Store sales people and some of the Genius Bar people are only knowledgeable about Apple's more recent offerings. They are not very knowledgeable, I found, about older PowerPC-based Apple computers, I'm afraid.
    Here's the real scoop.
    Your girlfriend's G5 can only install up to OS X 10.5 Leopard. This is the last compatible OS X version for PowerPC users.
    OS X 10.6 Snow Leopard and OS X10.7 Lion are for newer Intel CPU Apple computers.
    Early iMac G5s can only take up to 2 GB of RAM.
    Later iMac G5s (2005-2006) could take up to 2.5 GB of RAM.
    2 GB of RAM will run OS X 10.5 Leopard just fine.
    The very latest iTunes (10.5.2) can be installed and runs on both PowerPC and Intel CPU Macs.
    However, there are certain new iTunes features that won't work without an Intel Mac.
    One such iOS and iTunes feature is syncing wirelessly over WiFi.
    This will not work unless you have an iDevice running iOS 5 and an Intel Mac running 10.6 Snow Leopard or better.
    Although I was disappointed that I would not be able to do this with my G4 Mac, it's not a big problem for me.
    So, your girlfriend's computer should be fine for what she intends to use it for.
    The Apple people either just plain didn't know, or were trying to get you to think about buying a new Mac.
    At least as of now, that's not truly necessary.
    If Apple, at some later point, drops support for PowerPC users running 10.5, then would be the time to consider a new or "newer" Intel CPU Mac.
    My own planned Mac upgrade is to seek out a "newer" last-version G5 as my "new" Mac.
    I can't afford, right now, to replace all of my core PowerPC software with Intel versions, so I need to stick with the older PowerPC Macs for the time being. The last of the G5's is what I seek.

  • What is the most efficient way to convert a static site to a responsive site using Dreamweaver?

    I need to convert an old site made in Dreamweaver to be responsive to any monitor size. What is the most efficient way to do this?

    Depending on what you have to work with and how it was coded, it might be doable, and then again it might not. Suffice it to say, there are no magic buttons that will do this for you. Also consider that mobile and tablet users interact differently with their web devices, so your navigation and forms must be finger-friendly, and images and content must make mobile users happy without killing their data plans. There's a lot of planning that goes into making a good responsive website.
    Nancy O.

  • Most efficient way to delete "removed" photos from hard disk?

    Hello everyone! Glad to have this great community to come to for help. I searched for this question but came up with no hits. If it's already been discussed, I apologize and would love to be directed to the link.
    My wife and I have been using LR for a long time. We're currently on version 4. Unfortunately, she's not as tech-savvy or meticulous as I am, and she has been unknowingly "Removing" photos from the LR catalogues when she really meant to delete them from the hard disk. That means we have hundreds of unwanted raw photo files floating around in our computer and no way to pick them out from the ones we want! As a very organized and space-conscious person, I can't stand the thought. So my question is, what is the most efficient way to permanently delete these unwanted photos from the hard disk?
    I did find one suggestion that said to synchronize the parent folder with their respective catalogues, select all the photos in "Previous Import," and delete those, since they will be all of the photos that were previously removed from the catalogue.
    This is a great suggestion, but it probably wouldn't work for all of my catalogues since my file structure is organized by date (the default setting for LR). So, two catalogues will share the same "parent folder" in the sense that they both have photos from May 2013, but if I synchronize May 2013 with one, then it will get all the duds PLUS the photos that belong in the other catalogue.
    Does anyone have any suggestions? I know there's probably not an easy fix, and I'm willing to put in some time. I just want to know if there is a solution and make sure I'm working as efficiently as possible.
    Thank you!
    Kenneth

    I have to agree with the comment about multiple catalogs referring to images that are mixed in together... and the added difficulty that may have brought here.
    My suggestions (assuming you are prepared to combine the current catalogs into one):
    - In each catalog, put a distinctive keyword onto all the images so that you can later tell which particular catalog they were formerly in (just in case this is useful information later).
    - As John suggests, use File / "Import from Catalog" to bring all LR images together into one catalog.
    - Then, in order to separate out the image files that ARE imported to LR from those which either never were or have since been removed, I would duplicate just the imported ones to an entirely separate and dedicated disk location. This may require the temporary use of an external drive with enough space for everything.
    - To do this, highlight all the images in the whole catalog, then use File / "Export as Catalog", selecting the option "include negatives". Provide a filename and location for the catalog inside your chosen new saving location. All the image files that are imported to the catalog will be selectively copied into this same location alongside the new catalog. The same relative arrangement of subfolders will be created there for them all to live inside, as is seen currently. But image files that do not currently feature in LR will be left behind by this operation.
    your new catalog is now functional, referring to the copied image files. Making sure you have a full backup first, you can start deleting image files from the original location, that you believe to be unwanted. You can do this safe in the knowledge that anything LR is actively relying on, has already been duplicated elsewhere. So you can be quite aggressive at this, only watching out for image files that are required for other purposes (than as master data for Lightroom) - e.g., the exported JPG files you may have made.
    IMO it is a good idea to practice a full separation of image files used in your LR image library, from all other image files. This separation means you know where it is safe to manage images freely using the OS, vs where (what I think of as the LR-managed storage area) you need to bear LR's requirements constantly in mind. Better for discrete backup, too.
    In due course, as required, the copied image files plus catalog can be moved bodily to another drive (for example, if they have been temporarily put on an external drive, and you want to store them on your main internal one again). This then just requires a single re-browsing of their parent folder's location, in order to correct LR's records inside this catalog, as to the image files' changed addresses.
    If you don't want to combine the catalogs into one, a similar set of operations as above, can be carried out for each separate catalog you have now. This will create a separate folder structure in each case, containing just those duplicated image files. Once this has been done for all catalogs, you can start to clean up the present image files location. IMO this is very much the laborious and inflexible option, so far as future management of the total body of images is concerned... though there may still be some overriding reason for working that way.
    RP

  • Most efficient way to consume log files

    Hello everyone,
    I've been absent from the forums for a while, but I'm back at it now...
    I have a question about the most efficient way to consume log files. I read in PowerShell in Action by Bruce Payette that using a switch statement with a regex works pretty well; that being said, I haven't tried it yet. Select-String is working pretty well for me, but I have about 10 different entry types that I need to search the logs for every 5 minutes, and I'm scanning about 15 GB of logs at every interval. Anyway, if anyone has information about how to do something like that as quickly as possible, I'd appreciate it.
    1.  piping log files that meet my criteria to select-string
       - This seems to work well but I don't like searching the same files over and over again
    2. running logs through get-content and then building a filter statement
      - This is ok but it seems to use up a fair bit of memory
    3. Some other approach that I haven't thought of yet.
    Anyway, I know this is a relatively nebulous question, sorry about that. I'm hoping that someone on here knows a really good way to find strings in log files quickly.
    Hope that helps! Jason

    You can sometimes squeeze out more speed at the expense of memory usage, but filters are pretty fast. I don't see a benefit to holding the whole file in memory, in this case.
    As I mentioned earlier, though, C# code will usually blow PowerShell away in terms of execution time.  Here's a rewrite of what I just did (just for the INI Section pattern, to keep the post size down):
    $string = @'
    #Comment Line
    [Ini-Style Section Line]
    Key = Value Line
    192.168.0.1 localhost
    Some line that doesn't match anything.
    '@
    Set-Content -Path .\test.txt -Value $string
    Add-Type -TypeDefinition @'
    using System;
    using System.Text.RegularExpressions;
    using System.Collections;
    using System.IO;
    public interface ILineParser
    {
        object ParseLine(string line);
    }
    public class IniSection
    {
        public string Section;
    }
    // Matches lines like [Section Name] and wraps them in an IniSection object.
    public class IniSectionParser : ILineParser
    {
        public object ParseLine(string line)
        {
            object o = null;
            Match match = Regex.Match(line, @"^\s*\[([^\]]+)\]\s*$");
            if (match.Success)
            {
                o = new IniSection() { Section = match.Groups[1].Value };
            }
            return o;
        }
    }
    public class LogParser
    {
        // Streams the file line by line and yields whatever each parser recognises.
        public static IEnumerable ParseFile(string fileName, ILineParser[] lineParsers)
        {
            using (StreamReader sr = File.OpenText(fileName))
            {
                string line;
                while ((line = sr.ReadLine()) != null)
                {
                    foreach (ILineParser parser in lineParsers)
                    {
                        object result = parser.ParseLine(line);
                        if (result != null)
                        {
                            yield return result;
                        }
                    }
                }
            }
        }
    }
    '@
    $parsers = @(
        New-Object IniSectionParser
    )
    $results = [LogParser]::ParseFile("$pwd\test.txt", $parsers)
    $results
    Instead of defining separate classes for each type of line and output object, you could probably do something more generic with delegates (similar to how I used ScriptBlock.Invoke() in the PowerShell example), but it might sacrifice some speed to do so.

  • Iterators - most efficient way to get last object?

    I have a Collection of objects, such as from an EJB, and I want to get only the last object. What is the most efficient way of doing this?
    TIA!

    Addendum to previous post.
    Although, that test would call to mind the question: what are you going to do if it is 0? Throw an exception? As the code stands it will throw an index-out-of-bounds exception to the calling process, which could be handled there, and in truth probably should be.
    Modified and expanded example:
     public class ListThingie{
          java.util.ArrayList listOfThingies = null;
          public ListThingie(){
               super();
               listOfThingies = new java.util.ArrayList();
               buildListOfThingies();
          }
          public void buildListOfThingies(){
               //... build the ArrayList here
          }
          public Object getLastThingie() throws IndexOutOfBoundsException{
               // the last element sits at index size()-1
               return listOfThingies.get(listOfThingies.size()-1);
          }
     }
     public class ThingieProcessor{
          public static void main( String[] args){
               try{
                    ListThingie listThingie = new ListThingie();
                    System.out.println(listThingie.getLastThingie());
               }
               catch(IndexOutOfBoundsException e){
                    System.out.println(e.getMessage());
               }
          }
     }
     There, that's better.
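     If all you actually have is an Iterator rather than a List (so no random access like get(size()-1)), the only option is to walk to the end, which is O(n). A minimal sketch, with names of my own choosing:
     import java.util.Collection;
     import java.util.Iterator;
     // Sketch: with only an Iterator available, the last element can only be
     // reached by walking the whole collection.
     public class LastElement {
         public static Object getLast(Collection items) {
             Object last = null;
             for (Iterator it = items.iterator(); it.hasNext(); ) {
                 last = it.next();
             }
             return last;   // null if the collection was empty
         }
     }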

  • What is the most efficient way to turn an array of 16 bit unsigned integers into an ASCII string such that...?

    What is the most efficient way to turn a one dimensional array of 16 bit unsigned integers into an ASCII string such that the low byte of the integer is first, then the high byte, then two bytes of hex "00" (that is to say, two null characters in a row)?
    My method seems somewhat ad hoc. I take the number, split it, then interleave it with 2 arrays of 4095 bytes. Easy enough, but it depends on all of these files being exactly 16380 bytes, which theoretically they should be.
    The size of the array is known. However, if it were not, what would be the best method?
    (And yes, I am trying to read in a file format from another program)

    My method:
    Attachments:
    word_array_to_weird_string.vi (18 KB)
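    For anyone thinking about the byte layout in code rather than LabVIEW, the pattern described above (low byte, high byte, then two null bytes per 16-bit value) amounts to something like this Java sketch; the class and method names are purely illustrative, not the poster's VI:
    import java.io.ByteArrayOutputStream;
    // Sketch: pack each 16-bit unsigned value as low byte, high byte, 0x00, 0x00.
    public class WordPacker {
        public static byte[] pack(int[] words) {
            ByteArrayOutputStream out = new ByteArrayOutputStream(words.length * 4);
            for (int i = 0; i < words.length; i++) {
                int w = words[i] & 0xFFFF;   // treat the value as unsigned 16-bit
                out.write(w & 0xFF);         // low byte first
                out.write((w >> 8) & 0xFF);  // then high byte
                out.write(0);                // two trailing null bytes
                out.write(0);
            }
            return out.toByteArray();
        }
    }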

  • What is the best, most efficient way to read a .xls File and create a pipe-delimited .csv File?

    What is the best and most efficient way to read a .xls File and create a pipe-delimited .csv File?
    Thanks in advance for your review and am hopeful for a reply.
    ITBobbyP85

    You should have no trouble doing this in SSIS. Simply add a data flow with connection managers for an existing .xls file (Excel connection manager) and a new .csv file (flat file). Add a source for the xls and a destination for the csv, and set the destination csv connection manager's "delay validation" property to true. Use an expression to define the name of the new .csv file.
    In the flat file connection manager, set the column delimiter to the pipe character.
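    If a scripted alternative to SSIS is acceptable, a rough sketch in Java using the Apache POI library (my assumption, not something mentioned above; file names are hypothetical) could look like this:
    import java.io.FileInputStream;
    import java.io.PrintWriter;
    import org.apache.poi.hssf.usermodel.HSSFWorkbook;
    import org.apache.poi.ss.usermodel.Cell;
    import org.apache.poi.ss.usermodel.DataFormatter;
    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Sheet;
    import org.apache.poi.ss.usermodel.Workbook;
    // Sketch: read the first sheet of an .xls file and write it back out as a
    // pipe-delimited text file. Apache POI and the file names are assumptions.
    public class XlsToPipeCsv {
        public static void main(String[] args) throws Exception {
            Workbook wb = new HSSFWorkbook(new FileInputStream("input.xls"));
            PrintWriter out = new PrintWriter("output.csv");
            DataFormatter fmt = new DataFormatter();   // renders cells as displayed text
            Sheet sheet = wb.getSheetAt(0);
            for (Row row : sheet) {
                StringBuilder line = new StringBuilder();
                for (int c = 0; c < row.getLastCellNum(); c++) {
                    if (c > 0) {
                        line.append('|');
                    }
                    Cell cell = row.getCell(c);
                    line.append(cell == null ? "" : fmt.formatCellValue(cell));
                }
                out.println(line);
            }
            out.close();
            wb.close();
        }
    }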
