Spotlight issues again - only on a larger scale.

Hi there
A couple of weeks back I reported issues with Spotlight. It had stopped finding anything beyond mid April. It could find anything before that date but nothing added to the machine after.
I submitted a question, got it answered, and everything was fine.
https://discussions.apple.com/message/21828060?ac_cid=op123456#21828060
Went to do a search today and nothing. No apps, no GIFs, no JPGs, no HTML docs, and nothing at all with "a" or "." in the file name. Nothing, nada, zilch. No matter what I entered as a search, as far as Spotlight is concerned I have nothing on the machine, not even an operating system. It can't find anything on the external drives either.
Can anyone suggest a reason for this problem? I don't want to re-index my hard drive and 3 external drives every time I want to look for something.
Thanks in advance.
Edit - Added twist: if I use Spotlight in the menu bar it finds nothing, then it says it's indexing the iMac, then it shows some results and tells me it's indexing again, but the minute I click on "open results in Finder" all the results disappear, Finder shows 0 results, and Spotlight stops indexing.
The search option in the Finder won't find anything. I have to go back to the menu bar and type my search in there, but that just starts the whole cycle over again: no results, then it says it's indexing, then when I try to open the results in Finder everything vanishes again.
HELP!

Similar Messages

  • Are there any issues and potential solutions for large scale partitioning?

    I am looking at a scenario where a careful and "optimised" design has been made for a system. However, it has still resulted in thousands of entities/tables due to the complex business requirements. The option of partitioning must also be investigated due to the large amount of data in each table. It could potentially result in thousands of partitions on thousands of tables, if not more.
    Also assume that powerful computers, such as a SPARC M9000, can be employed under such a scenario.
    Keen to hear your comments. It would be helpful if you can back up your statements with evidence and keep to the context of this scenario.

    I did see your other thread, but kept away from it because it seemed to be getting a bit heated. Some points I did notice:
    People suggested that a design involving "thousands" of entities must be bad. This is neither true nor unusual. An EBS database may have fifty to a hundred thousand entities, no problem. It is not good or bad, just necessary.
    The discussion of "how many partitions" got stuck on whether Oracle really can support thousand of partitions per table. Of course it can - though you may find case studies that if you go over twenty or thirty thousand for a table, performance may degrade (shared pool issues, if I remember correctly).
    There was discussion of how many partitions anyone needs, with people suggesting "not many". Well, if you range partition per hour with 16 hash sub-partitions (not unreasonable in, for example, a telephone system) you have 24 x 16 = 384 per day, and they build up quite quickly unless you merge them (see the sketch at the end of this reply).
    Your own situation has never been fully defined. A few hundred million rows in a few TB is not unusual at all. But when you say "I don't have a specific problem to solve", alarm bells ring: are you trying to solve a problem that does not exist? If you get partitioning right, the benefits can be huge; get it wrong, and it can be a disaster. Don't do it just because you can. You need to identify a problem and prove, mathematically, that your chosen partitioning strategy will fix it.
    John Watson
    Oracle Certified Master DBA
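    For illustration only, here is a minimal sketch of the hourly range + 16-way hash scheme described above, using hypothetical table and column names (and assuming Oracle 11g or later for interval partitioning):
    -- One range partition per hour, each split into 16 hash sub-partitions,
    -- i.e. 24 x 16 = 384 segments per day.
    CREATE TABLE call_detail (
      call_id    NUMBER        NOT NULL,
      caller     VARCHAR2(32)  NOT NULL,
      started_at TIMESTAMP     NOT NULL,
      duration_s NUMBER
    )
    PARTITION BY RANGE (started_at)
      INTERVAL (NUMTODSINTERVAL(1, 'HOUR'))          -- a new partition is created automatically per hour
      SUBPARTITION BY HASH (caller) SUBPARTITIONS 16 -- 16 hash sub-partitions per hourly partition
      (PARTITION p_first VALUES LESS THAN (TIMESTAMP '2014-01-01 00:00:00'));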

  • ACS issues in large scale network with Prime Infra and WAAS express

    Hi,
    I wonder if there is a common practice or a recommended way of deploying a large scale network where Prime Infrastructure (PI) and the WAAS Central Manager keep logging into routers (a scale of 1000 or more) to collect statistics. The way PI and the WAAS CM collect stats from the routers (besides using SNMP) is that they log in (authenticate) with their usernames and passwords and issue multiple show and config commands on the routers. Imagine this routine happening every 5 - 10 minutes with all 1000+ routers at the same time, and the impact on the ACS server in terms of authentication requests and AAA logs. I'd appreciate it if somebody could recommend a solution where these elements can work together in a large scale network.
    Thanks,
    Tos

    The AEBS is connected to the TC via an Ethernet run from the basement to the main floor... it's not connected wirelessly.
    The "extend" feature is intended for wireless, not wired connections. Since you have the base stations connected by Ethernet, the downstream router just needs to be reconfigured as a bridge. The bridged router would then perform as a combination Wireless Access Point and Ethernet switch. Neither base station should be configured for "extending."
    Basically, you will want both to be configured for a "roaming" network.
    o Setup the base station connected to the Internet to "Share a public IP address."
    Internet > Internet Connection > Connection Sharing: Share a public IP address
    o Setup the remaining base station(s), as a bridge.
    Internet > Internet Connection > Connection Sharing: Off (Bridge Mode)
    For each base station in the roaming network:
    o Connect to the same subnet of the Ethernet network
    o Provide a unique Base Station Name
    o The Network Name should be identical
    o If using security, use the same encryption type (WEP, WPA, etc.) and password.
    o Make sure that the channel is set at least three channels apart from the next base station.
    while the TC is running at 2.4 GHz since my MBP is connected at speeds around 240 to the AEBS at the same time that my iPod is connected to the TC at speeds of only 54 max.
    The iPod is an 802.11b/g wireless device. It cannot connect at greater than the maximum bandwidth for that mode ... which is 54 Mbps, regardless of the bandwidth available.

  • Flex deployed on a large scale?

    We plan on developing a new product and Flex popped into my
    mind as a development platform. I know a good deal of Flex 1.5, but
    only used it for personal sites.
    My question, to those who have deployed it at large scale, is
    how well Flex behaves in such an environment. Server load will
    be at least a thousand / day.
    Thanks!

    Hmm... I made one medium sized application in 1.5 (approx 10
    screens, user access <1000 times per day) and it seems to be
    working alright for the client.
    Now I am working on a major application (over 20 main
    screens, and definitely access >1000 times per day) and it is not
    going well. I am really worried about the bugs and memory issues of
    Flex 2.0. I have also not found a sure-fire way to address these
    potential issues. I can say this: for the size of application we
    are making, Flex and Flash Player just aren't up to the job.
    Compiled and executed as a single .swf application results in 755MB
    ram usage and for some reason a constant CPU access of 60% (Pentium
    4 proc.) after accessing every screen. And this is just FlashPlayer
    doing what it is supposed to. Me, not being a computer engineer,
    can't really address these problems. Flex and AS aren't C. I can't
    control memory usage with my code. By breaking up the huge
    application into smaller ones and then loading those via an
    SWFLoader I may be able to avoid this rampant resource hogging but
    it's sort of illogical from an application architecture standpoint
    because this is ONE application.
    As a developer, I can see plenty of places to streamline the
    application but this simply isn't possible when dealing with the
    client. They want this screen to look and act this way and that
    screen to look and act the other way. I can talk about how if both
    screens use the same layout and logic they can both use the same
    template class, share static resources, blah blah until I am blue
    in the face but it won't matter because they are the client and
    they decide how the application is going to look--at the expense of
    streamlining. That's just the real world. Then I have to somehow
    make it work.
    By the way... before you think "just use view states!", I do
    use those--and bitwise logic flags for more complicated
    configurations--it's still not enough, although it did cut approx
    160 screens in documentation form down to just 20 in
    implementation.
    In worst case scenarios, I have to deny the client what they
    want and if they ask me why, I have to reply "it can't be done with
    Flex". Then their satisfaction in the product drops. Flex suddenly
    isn't as incredible as it seemed at first. Doesn't matter how
    pretty and animated the screens are if, when you run them for over an
    hour, your computer slows to a halt or .ttc fonts stop loading (HUGE
    issue here in Japan).
    I have yet to see a sample application that comes close to
    the scale of our current project: A library book browser? neat but
    that's just square one; A Commodore 64 emulator? cute. no place in
    business; A real-estate browser? in our project that would be the
    equivalent of ONE SCREEN out of the entire application.
    I like Flex. It's fun--on a small scale. But I never want to
    develop a real world business application using it again. There are
    way more (and way more skilled) Java, JSP, PHP, etc. etc.
    developers out there than Flex developers who can make much more
    robust applications. It's a shame the client got caught up in the
    hype of Flex RIA before the technology was ready for the task.
    Very long story short: Beware using Flex for an involved
    application.
    It's going to require exponentially more time than a smaller,
    less ambitious project--especially if you don't purchase FDS. And
    oh my god implementing a Flex application on a legacy Struts
    framework... kill me now! As much as I hate "page-refresh"
    applications, Flex (both 1.5 and 2.0) has not proven to be the
    god-send that I had hoped and dreamed it would be as a developer.
    What can you expect, though? It's only been out a few years.... And
    as far as clients' perspectives go, the price for FDS also
    certainly doesn't help make it appealing. That is why it is so
    embarrassing to tell them their dream application is quickly
    becoming an egregious memory hog.
    Anyway, good luck if you take on your project with flex. Just
    be careful!

  • Very-large-scale searching in J2EE

    I'm looking to solve a very-large-scale searching problem. I am creating a site
    where users can search a table with five million records, filtering and sorting
    independently on ten different columns. For example, the table might be five million
    customers, and the user might choose "S*" for the last name, and sort ascending
    on street name.
    I have read up on a number of patterns to solve this problem, but anticipate some
    performance issues. I'll explain below:
    1) "Page-by-Page Iterator" or "Value List Handler"
    In this pattern, it appears that all records that match the search criteria are
    retrieved from the database and cached on the application server. The client (JSP)
    can then access small pieces of the cached results at a time. Issues with this
    include:
    - If the customer record is 1KB, then wide search criteria (i.e. last name =
    S*) will cause 1 GB transfer from the database server to app server, and then
    1GB being stored on the app server, cached, waiting for the user (each user!)
    to ask for the next 10 or 100 records. This is inefficient use of network and
    memory resources.
    - 99% of the data transferred from the database server will not be used ... most
    users flip through a couple of pages and then choose a record or start a new search
    2) Requery the database each time and ask for a subset
    I haven't seen this formalized into a pattern yet, but the basic idea is this:
    If a client asks for records 1-100 first (i.e. page 1), only fetch that many
    records from the db. If the user asks for the next page, requery the database
    and use the JDBC API's ResultSet.absolute(int row) to start at record 101. Issue:
    The query is re-performed, causing the Oracle server to do another costly "execute"
    (bad on 5M records with sorting).
    To solve this, I've been trying to enhance the second strategy above by caching
    the ResultSet object in a stateful session bean. Unfortunately, this causes a
    "ResultSet already closed" SQLException, although I ensure that the Connection,
    PreparedStatement, and ResultSet are all stored in the EJB and not closed. I've
    seen this on newsgroups ... it appears that WebLogic is forcing the Connection
    closed. If this is how J2EE and pooled connections work, then that's fine ...
    there's nothing I can really do about it.
    Another idea is to use "explicit cursors" in Oracle. I haven't fully explored
    it yet, but it wouldn't be a great solution as it would be using Oracle-specific
    functionality (we are trying to be db-agnostic).
    More information:
    - BEA WebLogic Server 8.1
    - JDBC: Oracle's thin driver provided with WLS 8.1
    - Platform: Sun Solaris 5.8
    - Oracle 9i
    Any other ideas on how I can solve this issue?

    Hi. Fancy SQL to the rescue! If the table has a unique key, you can simply send a
    query per page, with iterative SQL that selects the next N rows beyond what was
    selected last time. Eg:
    Let variable X be the highest key value you've seen so far. Initially it would
    be the lowest possible value.
    select * from mytable M
    where ... -- application-specific qualifications...
    and M.key > X
    and (select count(*) from mytable MM where MM.key > X and MM.key < M.key and ...) < 100
    In English, this says, select all the qualifying rows higher than what I last saw, but
    only those that have fewer than 100 qualifying rows between the last I saw and them (ie:
    the next 100).
    When processing this query, remember the highest key value you see, and use it for the
    next query.
    Joe
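    For what it's worth, the same keyset idea can also be written with an inline view and ROWNUM - a minimal sketch, assuming the hypothetical mytable / key names from the example above and a numeric bind variable :last_seen_key holding the highest key fetched so far; on the Oracle 9i release mentioned in the question this avoids running a correlated COUNT(*) for every candidate row:
    select *
    from (
      select M.*
      from mytable M
      where M.key > :last_seen_key   -- plus any application-specific filters
      order by M.key
    )
    where ROWNUM <= 100;             -- fetch only the next 100 rows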

  • Your suggestion needed: Implementing large scale knowledgebase

    Hi,
    I am attempting to create a large scale knowledgebase (its size in the short term won't be huge - but it needs to have the capability of vastly increasing in size).
    The information I will be storing is information extracted from documents on Wikipedia. For example, given the sentence:
    "Tropical Storm Allison was a tropical storm that devastated southeast Texas"
    I want to break the information down into facts such as "Tropical Storm -> Allison", "Allison -> devastated southeast Texas".
    My question is, what would be the best implementation to use to implement the knowledgebase?
    I have two main ideas at present:
    1. A relatively simple, single table MySQL database (however, trying to determine how to insert all the facts I encounter into the same columns is an issue I have yet to resolve)
    2. A type of Tree for each page (subject) I extract information from. E.g for the above example:
    Tree Root -> "Tropical Storm"
    Child Node of Tree Root -> "Allison"
    Child of node Allison -> "devastated"
    Child of node devastated -> "southeast Texas"
    At present I am thinking the Tree option may be a better way to implement the knowledgebase. However, if I am going to represent each Wikipedia page (subject) as an individual Tree I would need a way of storing them, i.e. in an object database or some other method?
    Firstly, does anyone know how I might save each tree, so that when it's needed I could retrieve and search it for facts?
    Secondly, does anyone know of a better way to implement such a system?
    Thanks for your time and thoughts!

    Your real question appears to be how to save a "tree"-type structure in a relational database. And the answer is: it's pretty simple.
    If your tree is simple enough (by which I mean that each parent may have many children but each child will have only one parent) then you can do it all in one table.
    tblNode
    id int primary key
    parentid int (foreign key to itself.. sort of)
    name varchar
    so then for your example you would have
    id  parentid  name
    1   0         Tropical Storm
    2   1         Allison
    3   2         devastated southeast Texas
    4   0         Hurricane
    So then you would have a tree with Tropical Storm and Hurricane at the top. Underneath Tropical Storm is Allison. Underneath Allison is "devastated southeast Texas".
    So you can just query for the "root" nodes with
    SELECT id, name FROM tblNode WHERE parentid=0
    And then to get the child nodes for a particular node
    SELECT id,name FROM tblNode WHERE parentid=1
    etc
    If you are going to have situations where the child has more than one parent then you need two tables.
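    A minimal sketch of that two-table variant, using hypothetical names in the same style (a node table plus a separate link table, so a node can have any number of parents):
    CREATE TABLE tblNode (
      id   INT PRIMARY KEY,
      name VARCHAR(255) NOT NULL
    );
    CREATE TABLE tblLink (
      parentid INT NOT NULL,            -- references tblNode.id
      childid  INT NOT NULL,            -- references tblNode.id
      PRIMARY KEY (parentid, childid)
    );
    -- "root" nodes are the ones that never appear as a child
    SELECT id, name FROM tblNode
    WHERE id NOT IN (SELECT childid FROM tblLink);
    -- children of node 1
    SELECT n.id, n.name
    FROM tblNode n
    JOIN tblLink l ON n.id = l.childid
    WHERE l.parentid = 1;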

  • ELearning for Large Scale System Change

    Our next project is training for a large scale company wide system upgrade. Our users are very dependent on the software that will be replaced and several departments will need to learn their job anew. Any suggestions on how to use eLearning to increase adoption and comprehension of the new software?

    Hi Lincoln,
    I've worked on a number of large-scale IT change projects for international clients. I can make a few suggestions, some Captivate related, some more general eLearning related.
    On projects like this I tend to produce three types of training: face-to-face, interactive tutorials/simulations & job aids. Ideally the three are planned together, allowing you to create a single instructional design plan. You want people to be introduced to the system, learning to be reinforced and then everybody to be supported.
    The face-to-face training usually contains lots of show and tell, where the users are shown how to do tasks and then have a go at doing this themselves. Ideally a number of small tasks are shown and repeated, then they are all brought together by getting the learners to follow realistic scenarios that require all of the discrete tasks to be performed together. I find that lots of training doesn't integrate deeply with people's real-life jobs and the use of real world scenarios helps to improve retention and performance. I have made materials where the show-and-tell pieces have been pre-recorded, which you can do with Captivate.
    The interactive tutorials are usually used as follow-on material to the face-to-face modules, allowing learners to go through simulations with guidance when they do something unexpected, though sometimes there is no face-to-face training and the interactive tutorials have to deliver all of the teaching. Sometimes these include sections that merely show users and ask questions afterwards. Sometimes they become very complex branching scenarios. I usually build all of this in Captivate, though I do find it quite buggy and frustrating at times.
    Finally, I build small job aids. These are very specific and show how to do a well defined function, such as changing an existing customer's address. Sometimes these are Captivate movies, sometimes they are PDF files, often they are implemented as both. They can be embedded and/or linked from the system help screens and FAQs, as well as used in support responses and post-training emails. The movies tend to be short and sweet: 30-120 seconds long.
    In an ideal world the number of job aids grows rapidly after implementation in response to user support requests, though in reality I often have to anticipate what users are going to find difficult and create all of these prior to launch.
    If you are going to use Captivate for your project, then I suggest that you test, test and test again before agreeing what you will deliver. It's a great bit of software, in theory, but it is quite buggy. I'm working on a project with CP6.1 and I'm having lots of audio sync problems and video corruption issues publishing my final work as MP4 videos.
    In terms of effort, my rule of thumb is 20% planning, 60% design and scripting and 20% implementation.
    I hope this helps,
    David

  • OS X 10.8.3, TM & Spotlight Issue

    I just upgraded to OS X 10.8.3 and my TM is no longer backing up. This appears to be a Spotlight issue, as all I get is "Estimating Index Time" when I open Spotlight. I tried the trick of making my HD private in Spotlight and then public again - same result; it's still estimating index time. I've read lots of posts about Spotlight being buggy after operating system upgrades, although most are 1-2 years old. Anyone have a solution to this?

    The pink box in #D2 of Time Machine - Troubleshooting shows how to do it on a Time Machine backup drive; use the same procedure, but apply it to your internal HD instead.
    The index is normally hidden; you need to be able to see it to delete it via the Finder.  Installing the Tinker Tool app via the link there is one way of making them visible.
    Then locate and delete the index (but this time the one on your Mac's HD).
    Which part of that do you need help with?

  • Tweaking product prices on a large scale - how?

    My client has a software store on BC. His supplier is constantly changing their prices, and my client wants to be able to quickly review prices and make changes to reflect the supplier's prices every few days.
    If I export the Product List, the Excel export is unusable because it is full of HTML markup from the product descriptions.
    Apart from opening each product individually to check and tweak prices, how is everyone amending prices on a large scale? My client only has 60 products at the moment, but this is soon to quadruple, and I have prospective clients looking at BC for their ecommerce solution who have thousands of items.
    Regards
    Richard

    If it's just prices you want to input, see if you can eliminate all the other columns that are not needed and import only the price column (with its product identifier, of course), and see if it will just update the price without having to deal with the descriptions... Just a thought...

  • Large Scale Digital Printing Guidelines

    Hi,
    I'm trying to get a better handle on the principles and options for creating the best large and very large scale prints from digital files.  I'm more than well versed in the basics of Photoshop and color management, but there remain some issues I've never dealt with.
    It would be very helpful if you could give me some advice about this issue that I've divided into four levels.  In some cases I've stated principles as I understand them.  Feel free to better inform me.  In other cases I've posed direct questions and I'd really appreciate professional advice about these issues, or references to places where I can learn more.
    Thanks a lot,
    Chris
    Level one – Start with the maximum number of pixels possible.
    Principle: Your goal is to produce a print without interpolation at no less than 240 dpi.  This means that you need as many pixels as the capture device can produce at its maximum optical resolution.
    Level two – Appropriate Interpolation within Photoshop
    Use the Photoshop Image Size box with the appropriate interpolation setting (Bicubic Smoother) to increase the image size up to the maximum size of your ink jet printer.
    What is the absolute minimum resolution that is acceptable when printing up to 44”?
    What about the idea of increasing your print size in 10% increments? Does this make a real difference?
    Level three - Resizing with vector-based applications like Genuine Fractals?
    In your experience do these work as advertised, and do you recommend them for preparing files to print larger than the Epson 9900?
    Level four – Giant Digital Printing Methods
    What are the options for creating extremely large digital prints?
    Are there web sites or other resources you can point me to where I can learn more about this?
    How do you prepare files for very large-scale digital output?

    While what you say may be true, it is not always the case. I would view a 'painting' as more than a 'poster' in terms of output resolution, at least in the first stages of development. Definitely get the info from your printer and then plan to use your hardware/software setup to give you the most creative flexibility. In other words - work as big as you can (within reason, see previous statement) to give yourself the most creative freedom. Things like subtle gradations and fine details will benefit from more pixels, and can, with the right printer, be transferred to hard copy at higher resolutions (a photo-quality ink jet will take advantage of 600 ppi) if that's what you're going for.
    Additionally, it's much easier to downscale than to wish you had a bigger image after 100 hours of labor...

  • Applying Oil Paint Filter to Large Scale Images

    I need to apply the effects available from the Oil Paint filter to very large, 80 MB images. The filter works exactly as I need it to on small images, but not at large scale. A comment I heard in a Lynda.com video on the Oil Paint filter mentioned that the filter does not work well on large images. However, I REALLY need this, even if I need to pay someone to write a program that can do it! Does anyone know if / how I can get the filter to work for large images, and / or if there is a third-party plug-in that will provide the same results? Having this filter work on large-scale images could make or break a business idea I have, so finding a solution is extremely important to me.

    What's the problem you're having with applying it to an 80 MB image?  Is it that the effects don't scale up enough?
    Technically it can run on large images if you have the computer resources...  I've just successfully applied it to an 80 MB image, and with the sliders turned all the way up it looks pretty oil painty, though it kind of drops back into a realistic looking photo when zoomed out...
    If it's just that the sliders can't go high enough, given that it's a very abstract look you're trying to achieve, have you considered applying it at a downsized resolution, then upsampling, then maybe applying it again?  This is done that way...
    Oh, and by the way, Oil Paint has been removed from Photoshop CC 2014, so if you're planning a business based on automatically turning people's large photos into oil paintings you should assume you'll be stuck with running the older Photoshop version.
    -Noel

  • 6.0.1 ready for large scale deployment?

    I have a MARS 210 we are in the process of migrating to; at the end of the migration we will have 700 or so 2800-series IOS firewalls. We are halfway through the process and already MARS is having CPU issues, although I think this may be bug related. Is 6.0.1 ready for a large scale deployment and heavy load? I'm hoping this may bring the CPU down a little, but I do not want to introduce other issues.
    Thanks

    I ended up doing a clean install of a MARS 50, originally 4.3.6, and still have problems with graphgen shutting down.
    I have no support on my MARS 50, so I'm stuck waiting for a possible future upgrade beyond 6.0.1. It's odd that the same ISO install gives different results on the same hardware. MARS is not exactly an "appliance" like a PIX, but still... ISO-based installations should produce identical installations.
    /Fredrik

  • HT4623 I was updating my iPhone 4 wirelessly to iOS 7.0.2; after downloading the software it started to install but the phone never started up again. Only the Apple logo is appearing on the screen. Please help me out. Thanks.


    Hello taranpahara,
    The following article provides steps that can be quite helpful in getting your iPhone updated.
    iOS: Troubleshooting update and restore issues
    http://support.apple.com/kb/TS1275
    Cheers,
    Allen

  • Working w/ large scale AI files

    What is the best way to work with large-scale (wall mural) files that contain many gradients, shadows, etc.? Any manipulation takes considerable time to redraw. The files are customer-created, and we then manipulate them from there in-house. We have some fairly robust machines (Mac Pro towers, 14 GB RAM, RAID scratch disk). I would guess there is a way to work in a mode that does not render effects, allowing for faster manipulation? Any help would be great and would considerably reduce our time.
    First post - sorry if I did something wrong, question- or title-wise.
    THX!

    In a perfect world, the customers would be creating their Illustrator artwork with the size & scale of the final image in mind. It's very difficult to get customers (who often have no formal graphic design training and are self-taught at Illustrator & Photoshop) to think about the basic rule of designing for the output device - a graphics 101 sort of thing.
    Something like a large wall mural, especially one that is reproduced outdoors, can get by just fine with Illustrator artwork using raster effects settings of 72ppi or even less than that. Lots of users will have a 300ppi setting applied. 300ppi is fine for a printed page being viewed up close. 300ppi is sheer overkill for large format use.
    Mind you, Adobe Illustrator has a 227" X 227" maximum artboard size, so anything bigger than that will have to be designed at a reduced scale with appropriate raster effects settings applied. For example, I'll design a 14' X 48' billboard at a 1" = 1' scale. A 300ppi raster effects setting is OK in that regard. When blown up in a large format printing RIP the raster based effects have an effective 25ppi resolution, but that's good enough for a huge panel being viewed by speeding vehicles on city streets or busy highways.
    Outside of that, the render speed of vector-based artwork will depend on the complexity of the artwork. One "gotcha" I tend to watch for is objects with large numbers of anchor points (like anything above 4000 points). At a certain point the RIP might process only part of the object or completely disregard it.

  • How do large scale software companies (IBM, Microsoft, SAP) specify design?

    I am wondering if anyone can point me to a reliable source of information that shows how large scale software companies operate when it comes to software design specifications?
    Do they do a full design? Partial? No designs at all?
    Do the companies do a high and low level design, or split it into different phases/iterations?
    Do they use a proprietary format (Text + UML)? Straight UML? Text Only?
    Does anyone know of a source of information which describes this sort of thing?
    Thanks

    Most will have a multitude of "standards" and "processes" in use in different departments and for
    different projects.
    I agree with you, but is there information out there
    which points to what large companies tend to do during
    the design phase? On large scale projects (10,000+
    function points), it would be nearly impossible to
    approach this task without dividing and conquering
    (refer to Dr. Jenkings Extreme Software Cost
    Estimation, which says that an individual task of
    200,000+ lines of code can never be completed, or at
    least has never been done).
    Large scale projects exist without formal design. They usually arrive at that over time by incremental addition.
    So, there must be design approaches used, otherwise
    these companies would never be finishing their
    projects. So I am trying to find a source that I can
    cite with information on specifically what design
    process and modeling is being used.
    Just because there are no formal designs, or the formal designs are not up to date, does not mean there are no designs.
    The problem is not that designs do not exist, but rather that there is no way to communicate those designs to others. Formal, up-to-date designs solve that problem.
