What are your preferred word processing and drawing apps?

I currently use LibreOffice for most purposes, because it has both word processing and drawing software built in, and because it can read many file formats with less trouble than most other office suites. Unfortunately, while it can import AppleWorks word-processing documents, it can't import drawings, spreadsheets, etc., and while it can import older Word word-processing documents, it tends to corrupt the tables, the fancier quotes, etc. Also unfortunately, it does not work well with speech-to-text/dictation (not that dictation works well), and it sometimes screws up imported images.
I try to minimize formatting to avoid that in the future.
I used to use AppleWorks for most purposes, but it isn't supported anymore.
I tried TextEdit, but it doesn't allow me to save different versions of the same document, or a new document based on an old document; it screws with page margins; and it doesn't have drawing software. I used to use Word, and at one point used Pages, but there are all sorts of issues with their file formats and incompatibilities between different versions.
I would like a functional but not excessively fancy word-processing and drawing tool, and either as part of that, or in addition to that, a suitable version comparison tool. I draw a lot of maps, so the drawing program has to respect scale, and I have coordination problems, so it has to make it easy to put the right object in the right place. I've had trouble with that in LibreOffice.
So in general, what word processing and office suite software do you think works for your needs, or would work for my or other people's needs?


Similar Messages

  • What is your preferred import format and why?

    Just curious as to what people like to use for importing information into PowerShell. It obviously depends on the format of the information, like lists, single text fields and lists of lists.

    Thanks, Fred, this is actually for a new user script. I recently migrated our old new-user script from VB to PowerShell and things seem to be working OK with it, but it's time to expand it a bit and do automatic group provisioning. Each department is going to send me a list of AD groups each new user should be added to for file/folder access and security, and I was wondering what the best format for those lists would be. I was personally thinking of having each list be a separate "text" file with each group on a separate line and just importing them as CSVs. What do you think?
    Thanks.
    That is not a CSV. The usual and easiest method is to have them build an Excel workbook with users listed by group:
    Example:
    UserID, Group
    jsmith, users1
    jsmith, users2
    ljones, users1
    ljones,users4
    This would be easiest to loop through and should be easy to build. It can be saved as a CSV or read directly out of an Excel workbook.
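    For that two-column layout, a minimal sketch of the import side might look like the following (my illustration, not part of the original reply; the file name usergroups.csv is hypothetical, and Add-ADGroupMember assumes the ActiveDirectory module is available):
    # Read the UserID,Group rows saved from the workbook as a CSV
    # (assumes a header row of UserID,Group with no stray spaces),
    # group them by user, and add each user to all of their groups.
    Import-Csv usergroups.csv | Group-Object UserID | ForEach-Object {
        $user = $_.Name
        foreach ($row in $_.Group) {
            # -WhatIf only reports what would happen; remove it to apply the change
            Add-ADGroupMember -Identity $row.Group -Members $user -WhatIf
        }
    }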
    If you want simple text, then forget CSV and use this format instead:
    jsmith, group, group2, group3
    Now read one line at a time and split it:
    Get-Content usergroups.txt | ForEach-Object {
          $userID, $groups = $_ -split ','
          # process user
    }
    Now you have the userid and an array of groups.
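    For illustration only, the "# process user" placeholder could be filled in along these lines, again assuming the ActiveDirectory module and a hypothetical usergroups.txt in the format shown above:
    # Each line: userid, group1, group2, ...
    Get-Content usergroups.txt | ForEach-Object {
        # .Trim() over the whole split array needs PowerShell 3.0 or later
        $userID, $groups = ($_ -split ',').Trim()
        foreach ($g in $groups) {
            # -WhatIf only reports what would happen; remove it to apply the change
            Add-ADGroupMember -Identity $g -Members $userID -WhatIf
        }
    }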
    ¯\_(ツ)_/¯

  • What are the best programs for word processing and excel for the iPad

    I was planning to use my iPad for word processing and Excel work. What programs are best, and would they be compatible with the Word documents I have on a Wi-Fi drive?

    http://itunes.apple.com/ca/app/pages/id361309726?mt=8
    http://itunes.apple.com/us/app/numbers/id361304891?mt=8
    http://itunes.apple.com/ca/app/documents-to-go-premium-office/id317107309?mt=8

  • What's happening when Word opens and saves a document?

    Hello,
    I have a very unspecific question, but I don't know where to start. I have an Open XML document which gets generated by a third-party tool. If I open the document using the Open XML SDK it looks VERY different from how it looks when I opened and saved it before using Word 2013.
    I'm trying to get an understanding of what Word is doing. From what I've seen it's doing some remodelling of the XML file, since it looks very different (other and way more descendants than before), and I would like to achieve the same thing using Open XML.
    Although this might already be too much information: the generated Word document contains HTML code with links to pictures. After Word has touched the file, the picture is inside a drawing element; before, it is not, but both versions look the same in Word.
    I'm thankful for any help you can give me.
    Cheers, Andreas
    MCPD SharePoint 2010. Please remember to mark your question as "answered"/"Vote helpful" if this solves/helps your problem.

    Hi Andreas,
    >> If I open the document using the Open XML SDK it looks VERY different from how it looks when I opened and saved it before using Word 2013
    >> What's happening when Word opens and saves a document?
    This forum discusses Open XML development, and this is not exactly an Open XML SDK question. To be frank, I do not know exactly what Word does when it opens and saves a document either (these are implementation details of the product).
    If you are interested in it, the forum below might be more appropriate for you:
    https://social.msdn.microsoft.com/Forums/en-US/home?forum=os_binaryfile
    >> From what I've seen it's doing some remodelling on the xml file, since it looks very different (other and way more descendants than before) and I would like to achieve the same thing using open xml.
    What do you mean by "other and way more descendants"? Do you mean that you want to add new nodes to the XML document? In my opinion, different objects in Word have different nodes. You could refer to the link below for more information about Word processing.
    https://msdn.microsoft.com/EN-US/library/office/cc850833.aspx
    If you have issues with the Open XML SDK, since that is a new issue different from your original one, I suggest you post a new thread for it in this forum. Then more community members can discuss the question.
    Best Regards,
    Edward

  • What are the settings for datasource and infopackage for flat file loading

    Hi,
    I'm trying to load data from a flat file to a DSO. Can anyone tell me what the settings for the DataSource and InfoPackage are for flat file loading?
    Please let me know.
    Regards,
    Kumar

    Loading of transaction data in BI 7.0: a step-by-step guide on how to load data from a flat file into the BI 7 system.
    Uploading of Transaction data
    Log on to your SAP
    Transaction code RSA1 leads you to the Modelling view.
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( Transaction data )
    • In general tab give short, medium, and long description.
    • In extraction tab specify file path, header rows to be ignored, data format(csv) and data separator( , )
    • In proposal tab load example data and verify it.
    • In the fields tab you can give the technical names of the info objects in the template; then you do not have to map them during the transformation, because the server will map them automatically. If you do not map them in this fields tab, you have to map them manually during the transformation in the InfoProviders.
    • Activate data source and read preview data under preview tab.
    • Create an InfoPackage by right clicking the DataSource, and in the schedule tab click Start to load the data to the PSA (make sure the flat file is closed during loading).
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to create ODS( Data store object ) or Cube.
    • Specify a name for the ODS or cube and click Create.
    • From the template window select the required characteristics and key figures and drag and drop it into the DATA FIELD and KEY FIELDS
    • Click Activate.
    • Right click on ODS or Cube and select create transformation.
    • In the source of the transformation, select the object type (DataSource) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate created transformation
    • Create Data transfer process (DTP) by right clicking the master data attributes
    • In extraction tab specify extraction mode ( full)
    • In update tab specify error handling ( request green)
    • Activate DTP and in execute tab click execute button to load data in data targets.
    4. Monitor
    Right click the data target, select Manage, and in the contents tab select Contents to view the loaded data. There are two tables in an ODS, the new table and the active table; to move data from the new table to the active table you have to activate the request after the load. Alternatively, the monitor icon can be used.
    Loading of master data in BI 7.0:
    For Uploading of master data in BI 7.0
    Log on to your SAP
    Transaction code RSA1 leads you to the Modelling view.
    1. Creation of Info Objects
    • In left panel select info object
    • Create info area
    • Create info object catalog ( characteristics & Key figures ) by right clicking the created info area
    • Create new characteristics and key figures under respective catalogs according to the project requirement
    • Create required info objects and Activate.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create application component(AC)
    • Right click AC and create datasource
    • Specify data source name, source system, and data type ( master data attributes, text, hierarchies)
    • In general tab give short, medium, and long description.
    • In extraction tab specify file path, header rows to be ignored, data format(csv) and data separator( , )
    • In proposal tab load example data and verify it.
    • In the fields tab you can give the technical names of the info objects in the template; then you do not have to map them during the transformation, because the server will map them automatically. If you do not map them in this fields tab, you have to map them manually during the transformation in the InfoProviders.
    • Activate data source and read preview data under preview tab.
    • Create an InfoPackage by right clicking the DataSource, and in the schedule tab click Start to load the data to the PSA (make sure the flat file is closed during loading).
    3. Creation of data targets
    • In left panel select info provider
    • Select created info area and right click to select Insert Characteristics as info provider
    • Select required info object ( Ex : Employee ID)
    • Under that info object select attributes
    • Right click on attributes and select create transformation.
    • In the source of the transformation, select the object type (DataSource) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate created transformation
    • Create Data transfer process (DTP) by right clicking the master data attributes
    • In extraction tab specify extraction mode ( full)
    • In update tab specify error handling ( request green)
    • Activate DTP and in execute tab click execute button to load data in data targets.

  • What are your impressions of "multi-tasking"?

    If you have iOS4 and a capable device, you should have multi-tasking and opened apps appearing in the task bar. Newer app versions are able to run in the background. In my opinion, when I close most apps (by pressing the home button), I want them to close completely, not run in the background. Aside from being a privacy issue, apps in the task bar may use battery power or if truly in a suspended mode, they still take up memory or process capability. In order to really shut them down, two additional home button clicks and then two more screen strokes are required. Not very efficient and probably leading to an earlier home button failure. Why not have some kind of screen command (tap or swipe combination?) to simultaneously shut down all apps in the task bar? The bar itself is useless if you have used many apps in the course of a day. To find what you're looking for, you have to scroll through a long parade of icons. It's a lot easier to just tap the icon where you know it is sitting in the nice folder you created. I don't get it. I realize the bar can be used for switching open apps, but this is really not that big of a deal for the vast majority of apps. Furthermore, there should be an option to enable or disable multi-tasking globally as well as for individual apps. Now that would be an improvement.
    What are your thoughts?

    I haven't done any kind of multitasking on my iPod touch; I'm waiting to upgrade my software to the newer version. However, I couldn't agree more about having to press the home button many times to perform the basic function of switching apps or enabling multitasking. I think Palm adopted a better software design than Apple ever did with iOS 4. Palm's webOS can handle full multitasking, something the iPhone can't do. Palm uses what it calls a "deck of cards" model for managing multitasking: you can view each of your open applications at once, shuffle them any way you choose, and then discard the ones you want to close. All of this is done with intuitive gestures that mimic handling a physical deck of cards. Apps remain live, even when minimized into the card view, so changes can continue to happen in real time, even if you've moved on to another activity.
    I had the chance to try Palm's webOS a bit at an AT&T store, and my impression is that its multitasking is done more elegantly than Apple's home-button-pressing system.
    P.S. to NYtroutbum: you should definitely submit that idea to Apple via its product feedback page. Let's hope it listens.

  • What is the best word processing program for mac?

    What is the best word processing program for Mac?

    That's an impossible question to answer - what type of writing will you be doing?
    Before we can point you in a direction you need to tell us what kind of writing you will be doing. As phrased, the question is a bit like "what's the best car?" Well, few would doubt that a Ferrari is superior to a Honda minivan, but the Ferrari will not be much use if you're dragging 4 kids, two dogs and the grandparents along.
    Academic?
    Word is the nearest thing to a standard format in the Humanities, but in the sciences and math you'll find TeX more useful for laying out formulas. The older Pages is quite good as a substitute for Word in many cases in the Humanities, but you'll have big problems with citations in the newer version. Apps like Nisus Writer Pro and Mellel are more powerful than Pages, but each has shortcomings. Scrivener and Ulysses are excellent drafting tools but you'll need a word processor for final layout. There are lots and lots of lightweight editors out there - Byword, iA Writer (and Writer Pro), OmmWriter - that have little functional use in academic settings - poor or no support for citations, for instance, or poor or no interoperability with Word - but which might be excellent for other uses.
    So, back to the key question: what kind of writing will you be doing?

  • What is the best word processing program for ipad2?

    What is the best word processing program for ipad2?

    I would recommend using Pages for what you are doing. It is the best one to use on the iPad itself, and has the ability to convert to an MS Word format. I know that it can do this through email, but I am also fairly certain you can import it in this format through iTunes, though you may need to convert it on the iPad itself.
    With iOS 5 and iCloud, I recommend kicking Word to the curb altogether and going with Pages on both machines. Not only will everything be more seamless, but iCloud will update the changes you make, on either device, automatically. And Pages is only 20 bucks through the App store for the Mac (OS 10.6.7 or later needed).

  • What are the Relations between Journalizing and IKM?

    What is the best method to use in the following scenario:
    I have about 20 source tables with large amount of data.
    I need to create interfaces that join the source tables into target tables.
    The source tables are inserted into every few seconds, with hundreds to thousands of rows at a time.
    There can be a gap of a few seconds between the inserts of different tables that should be joined.
    The source and target tables are on the same Oracle instance and schema.
    I want to understand the role of: 'Journalizing CDC' and 'IKM - Incremental Update' and
    how can I use them in my scenario?
    In general What are the relations between 'Journalizing' and 'IKM'?
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    I want to understand what is the role of 'Journalizing CDC'?
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Does 'Journalizing' need to have PK on the tables?
    What should I do if I can't put a PK on them (there can be multiple identical rows)?
    Thanks in advance Yael

    Hi Yael,
    I will try and answer as many of your points as I can in one post :-)
    Journalizing is a way of tracking only changed data in your source system. If your source tables had a date_modified column you could always use that as a filter when scanning for changes rather than CDC. Log-based CDC (Asynchronous in ODI, using Logminer/Streams or GoldenGate for example) removes the overhead of placing a trigger on the source table to track changes, but be aware that it doesn't fully remove the need to scan the source tables. In answer to your question about primary keys: Oracle CDC with ODI will create an unconditional log group on the columns that you have defined in ODI as your PK. The PK columns are tracked by the database and presented in a journal table (J$<source_table_name>); this journal table is joined back to the source table via a journalizing view (JV$<source_table_name>) to get the rest of the row (i.e. the non-PK columns). So be aware that when ODI comes around to get all the data in the journalizing view (i.e. inserts, updates and deletes), the source database performs a join back to the source table. You can negate this by specifying ALL source table columns as your PK in ODI, which forces all columns into the unconditional log group, the journal table, etc. You will then need to tweak the JKM to change the syntax sent to the database when starting the journal. I have done this in the past, using a flexfield on the datastore to toggle 'Full Column' / 'Primary Key Cols' in the JKM setup (there are a few E-Business Suite tables with no primary key, so we had to do this). The only problem with this approach is that with no PK you need to make sure you only get the 'last' update, and apply the changes in the right order to your target tables; otherwise you might process the update before the insert, for example, and be out of sync.
    So JKMs provide a mechanism for only changed data to be provided to ODI. If you want to handle deletes in your source table, CDC is useful (otherwise you don't capture the delete with a normal LKM/IKM setup).
    IKM Incremental Update can be used with or without JKMs; it's for integrating data into your target table. Typically it will do a NOT EXISTS or a MINUS when loading the integration table (I$<target_table_name>) to ensure you only get changed rows on the load into the target.
    user604062 wrote:
    I want to understand the role of 'Journalizing CDC' and 'IKM - Incremental Update' and how can I use them in my scenario?
    Hopefully I have explained it above; it's the type of thing you really need to play around with, thoroughly reviewing the operator logs to see what is actually going on (I think this is a very good guide to setting it up: http://soainfrastructure.blogspot.ie/2009/02/setting-up-oracle-data-integrator-odi.html).
    In general, what are the relations between 'Journalizing' and 'IKM'?
    A JKM simply presents (only) changed data to ODI; it removes the need for you to decide 'how' to get the updates and removes the need for costly scans of the source table (full source-to-target table comparisons, scanning for updates based on a last update date, etc.).
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    Delete and insert into the target is fine, but ask yourself how you identify which rows to process. Inserts and updates are generally OK; to spot a delete you need to compare the tables in full (target table minus source table = deleted rows). Do you want to copy the whole source table every time to perform this? Are they in the same database?
    I want to understand what the role of 'Journalizing CDC' is.
    It's the ODI mechanism for configuring, starting and stopping the change data capture process in the source systems. There are different KMs for separate technologies and a few to choose from for Oracle (triggers (Synchronous), Streams/Logminer (Asynchronous), GoldenGate, etc.).
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Yes, of course. Without CDC your process would look something like:
    Source table ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    With CDC your process looks like :
    Source journal (J$ table with JV$ view) ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    As you can see, it's the same process after the source table (there is an option in the interface to enable the J$ source; the IKM step changes with CDC, as you can use 'Synchronise Journal Deletes').
    Does 'Journalizing' need to have a PK on the tables?
    Yes, at least a logical PK in the datastore; see my reply at the top for the reasons why (log groups, joining the J$ table back to the source table, etc.).
    What should I do if I can't put a PK on them (there can be multiple identical rows)?
    Either talk to the source system people about adding one, or be prepared to change the JKM (and maybe the LKM and IKMs); you can try putting all columns in the PK in ODI. Ask yourself this: if you have 10 identical rows in your source and target tables, and one row gets updated, how can you identify which row in the target table to update?
    Thanks in advance, Yael.
    It is a lot to take in; as I advised, I would recommend you get a little test area set up and also read the Oracle database documentation on CDC, as it covers a lot of the theory that ODI is simply implementing.
    Hope this helps!
    Alastair

  • Looking for a new laptop: what are the differences between Pro and Air, besides size? Does the Air perform like the Pro?

    Looking for a new laptop: what are the differences between Pro and Air, besides size? Does the Air perform like the Pro?

    The NEW macbook Pro and Air are EXTREMELY close in form factor
    The newest MacBook Pro is essentially a larger MacBook Air with a Retina display and options for more speed at increasing prices, up to dedicated graphics and a quad-core processor.
    both Air and new Pro now have PCIe SSD and permanent RAM.
    The Air is the lightweight, portable form factor, fast to boot and shut down, but with longer battery life than any of the 13" MacBook Pros.
    Now the new macbook Pro and macbook Air are extremely close in form factor and nature.
    both have 802ac wifi
    both have permanent RAM, no superdrive
    both are slim profiles and SSD
    The only real differences now are (in the most expensive Pros) faster processors and quadcore processors and top end model autonomous graphics.
    ....and of course the retina display
    both are now "very good for travel"
    Other than features the form factor of the Air and Pro are VERY close now,....so now its merely a matter of features and price more than anything.
    You need an external HD regardless of what you get for backups etc.   Drop into an Apple store and handle both and make your choice based on features, such as Retina or non-retina, .... both at a distance now look like the same computer.
    The Pro weighs more, ....but nowhere near what it used to just a month ago on the older macbook Pros
    The NEW MacBook Pro is a different creature entirely from the older MacBook Pro; the new Pro is thicker than the Air, but I'd frankly call the NEWEST Pro a "MacBook Air with Retina display", or maybe a "MacBook Air PRO with Retina display".
    Instead of Air vs. Pro now, it's really a smooth transition from Air to Pro rather than a comparison of two different creatures; now it's like contrasting a horse with a racehorse.
    Get either one with 8 GB of RAM (preferably); the 4 GB upgrade costs very little. The i7 is only about 15% faster than the i5 on heavy applications, and no faster on most apps, while the i5 has longer battery life.
    As you see below, the non-Retina 13" AIR is 82% of the Macbook with Retina display in resolution
    there is no magical number of pixels per inch that automatically equates to Retina quality.
    http://www.cultofmac.com/168509/why-you-might-be-disappointed-by-the-resolution-of-those-new-retina-display-macs-feature/
    A huge internal SSD isn't a game changer for anything; you need an external HD anyway.
    What you WON'T READ on Apple.com etc. is that the larger SSDs are MUCH FASTER due to SSD density:
    "The 512GB Samsung SSD found in our 13-inch model offers roughly a 400MB/s increase in write speeds over the 128GB SanDisk/Marvell SSD"
    http://blog.macsales.com/19008-performance-testing-not-all-2013-macbook-air-ssds-are-the-same
    Here is an excellent video comparison between the 11” I5 vs. I7 2013 Macbook Air.
    http://www.youtube.com/watch?v=oDqJ-on03z4
    http://www.anandtech.com/show/7113/2013-macbook-air-core-i5-4250u-vs-core-i7-4650u/2
    i5 vs. i7 performance, 13" MacBook Air 2013:
    Boot performance: 11.7 (i5) vs. 11.4 (i7)
    Cinebench: 1.1 (i5) vs. 1.41 (i7)
    iMovie import and opt.: 6.69 (i5) vs. 5.35 (i7)
    iMovie export: 10.33 (i5) vs. 8.20 (i7)
    Final Cut Pro X: 21.47 (i5) vs. 17.71 (i7)
    Adobe Lightroom 3 export: 25.8 (i5) vs. 31.8 (i7)
    Adobe Photoshop CS5 performance: 27.3 (i5) vs. 22.6 (i7)
    Reviews of the newest Retina 2013 Macbook Pro
    13”
    Digital Trends (13") - http://www.digitaltrends.com/laptop-...h-2013-review/
    LaptopMag (13") - http://www.laptopmag.com/reviews/lap...play-2013.aspx
    Engadget (13") - http://www.engadget.com/2013/10/29/m...-13-inch-2013/
    The Verge (13") - http://www.theverge.com/2013/10/30/5...ay-review-2013
    CNet (13") - http://www.cnet.com/laptops/apple-ma...-35831098.html
    15”
    The Verge (15") - http://www.theverge.com/2013/10/24/5...w-15-inch-2013
    LaptopMag (15") - http://www.laptopmag.com/reviews/lap...inch-2013.aspx
    TechCrunch (15") - http://techcrunch.com/2013/10/25/lat...ok-pro-review/
    CNet (15") - http://www.cnet.com/apple-macbook-pro-with-retina-2013/
    PC Mag (15") - http://www.pcmag.com/article2/0,2817,2426359,00.asp
    Arstechnica (15") - http://arstechnica.com/apple/2013/10...-pro-reviewed/
    Slashgear (15") - http://www.slashgear.com/macbook-pro...2013-26303163/

  • G10 Aperture RAW conversion: what are your impressions?

    the wait is over!
    2.4 RAW Compatibility update includes Canon G10.
    what are your impressions?
    what Aperture settings yield best results?
    how do they compare to Camera RAW and DPP?

    When you compare photos that were shot at ISO 100, they all do a good job. When you start going up in ISO is where I think Aperture does a great job. I've attached a screen capture of the same photo processed with 3 different applications. No adjustments were added to the photos; the default settings were used, then the photo was passed on to Photoshop as a TIFF. I think it's clear why I don't like ACR. Aperture and DPP are much closer. DPP has some noise reduction on by default, so the photo looks like it has less noise than Aperture. I feel that the default noise reduction just makes the photo look a little soft and out of focus; if I turn off the default noise reduction in DPP, the photo looks noisy. So I like Aperture better because of the way the noise looks and the sharpness and detail of the photo. Another area to look at is the neck and chest area: Aperture holds the most detail before blowing out. I know that all 3 programs have adjustments that will help fix the problems in the photo, but even after doing that in all 3 programs I still felt that Aperture was clearly better. As the previous poster said, it is subjective to each person's taste.
    I've never used this way of posting a screen grab so if it doesn't work forgive me. Make sure to click on the photo to view the large file.

  • What are the differences between Logos and LogosXT?

    What are the differences between Logos and LogosXT?

     Logos XT is a networking middle-layer maintained by the LabVIEW Network Technologies and Security group. Logos XT provides a thin layer on top of TCP/IP to simplify some common network tasks.
    The underlying foundation for NI networking is called Logos.
    I believe that the basic idea is Logos is what is going on behind the scenes at the base level and Logos XT lets you build your own networking protocols on top of Logos.  Logos XT would be used if you want to make your own networking protocol instead of using TCP/IP or UDP.
    Scott A
    SSP Product Manager
    National Instruments

  • What is a good word processing program?

    Can someone suggest a good word processing app for Mac? 

    You can purchase Office 2011 - Mactopia - if you want the best. Or you can try the freeware suite, Libre Office, that is functionally similar to Office 2007 for Windows except it works on Lion.
    You may want to consider as well:
    TextEdit is included with OS X. It is not a high level word processor but it may be adequate.
    A good free alternative to TextEdit with more features is Bean 3.2.5.
    These two suites are similar to Libre Office but not as current or as well-supported:
    NeoOffice
    Open Office
    And, then there is Apple's iWork suite:
    Pages - word processing and layout
    Keynote - presentation
    Numbers - spreadsheet
    Each can open and save Office compatible files. They may be purchased separately via the Mac App Store for $19.99 each.
    (Access to the Mac App Store requires Snow Leopard 10.6.6 or higher and an Apple ID.)

  • What are your top 3 favorite AIR Native Extensions? (any OS)

    What are your top 3 favorite AIR Native Extensions?
    OS is irrelevant.

    From looking at most of the threads posted in these forums, people generally come here seeking help with issues they are having while developing an AIR application, hence the name of this community, "Adobe AIR Development". If a question that doesn't appear to be seeking help with an issue or potential bug isn't answered within 3 days, that doesn't mean that AIR is dead. People are using this forum every day, as indicated by there being new or active threads every day. Most questions that people answer or have an interest in are related to iOS and Android development.

  • What are some of the process chain errors one comes across?

    hi friends
    What are some of the process chain errors one comes across?

    Hello Kiran,
    Here are some of the errors that come across...
    -> Rollup not possible, no filled aggregates available
    -> Error when starting the extraction program
    -> The process step is locked by another change run
    -> InfoObject does not contain alpha-conforming values
    -> Errors due to non-activated objects
    -> Errors due to duplicate records
    -> Errors due to locking
    -> Problems because of connection problems between the source and BW systems
    -> Problems due to RFC
    Hope this helps you
    Cheers
    SRS
