RSI - Relative Strength Index - formula in PowerPivot

Hello All,
We're trying to add the RSI (Relative Strength Index) formula to monthly data within PowerPivot.
Most of the work is done in the sample below:
Relative Strength Index Sample
But it gets complicated quickly once recursion is needed.
We already have the data model with the calendar, some sample data, and the first working formulas in the sample.
The rest of the needed formulas are in the Excel spreadsheet, but we need them within PowerPivot.
Can you please take a look and help?

Awesome :)
I'm having a hard time changing the sources though.
I'm trying to join the appended query with your last query.
1) Here's the appended query:
let
    Source = Table.Combine({MonthlyQuery, Daily})
in
    Source
2) Here's yours: TblAvgAll(2)
let
    fnFirst14_ = (SYMBOL) =>
        let
            Source = Excel.Workbook(File.Contents("C:\Users\Imke\Documents\BI\Foren\Antworten\TN_RSI-On-PowerpivotCleanDataSourcev3.xlsx")),
            Data_Sheet = Source{[Item="Data",Kind="Sheet"]}[Data],
            Header = Table.PromoteHeaders(Data_Sheet),
            Filter = Table.SelectRows(Header, each ([SYMBOL] = SYMBOL)),
            KeepFirst14 = Table.FirstN(Filter, 14)
        in
            KeepFirst14,

    fnRSI_ = (AvgGain14, AvgLoss14, SYMBOL) =>
        let
            Source2 = Excel.Workbook(File.Contents("C:\Users\Imke\Documents\BI\Foren\Antworten\TN_RSI-On-PowerpivotCleanDataSourcev3.xlsx")),
            Data_Sheet = Source2{[Item="Data",Kind="Sheet"]}[Data],
            Header = Table.PromoteHeaders(Data_Sheet),
            FilterSymbol = Table.SelectRows(Header, each ([SYMBOL] = SYMBOL)),
            RemoveFirst13 = Table.Skip(FilterSymbol, 13),
            RemoveFirst14 = Table.Skip(FilterSymbol, 14),
            ListGain = List.Buffer(RemoveFirst14[Gain]),
            ListLoss = List.Buffer(RemoveFirst14[Loss]),
            ListDate = List.Buffer(RemoveFirst13[DATEA]),
            Iterate = List.Generate(
                () => [Counter = 0, Value_ = AvgGain14],
                each [Counter] <= List.Count(ListGain),
                each [Counter = [Counter] + 1,
                      Value_ = ([Value_] * 13 + ListGain{[Counter]}) / 14],
                each [Value_]),
            // Fixed: the loss iteration must read from ListLoss, not ListGain
            IterateLoss = List.Generate(
                () => [Counter = 0, Value_L = AvgLoss14],
                each [Counter] <= List.Count(ListLoss),
                each [Counter = [Counter] + 1,
                      Value_L = ([Value_L] * 13 + ListLoss{[Counter]}) / 14],
                each [Value_L]),
            Table = Table.FromColumns({Iterate, IterateLoss, ListDate})
        in
            Table,

    Quelle = Excel.Workbook(File.Contents("C:\Users\Imke\Documents\BI\Foren\Antworten\TN_RSI-On-PowerpivotCleanDataSourcev3.xlsx")),
    Data_Sheet = Quelle{[Item="Data",Kind="Sheet"]}[Data],
    #"Erste Zeile als Header" = Table.PromoteHeaders(Data_Sheet),
    #"Entfernte Duplikate1" = Table.Distinct(#"Erste Zeile als Header", {"SYMBOL"}),
    #"Andere entfernte Spalten" = Table.SelectColumns(#"Entfernte Duplikate1", {"SYMBOL"}),
    #"Hinzugefügte benutzerdefinierte Spalte" = Table.AddColumn(#"Andere entfernte Spalten", "Function", each fnFirst14_([SYMBOL])),
    #"Function erweitern" = Table.ExpandTableColumn(#"Hinzugefügte benutzerdefinierte Spalte", "Function", {"Gain", "Loss"}, {"Gain", "Loss"}),
    #"Gruppierte Zeilen" = Table.Group(#"Function erweitern", {"SYMBOL"}, {{"AvgGain14", each List.Average([Gain]), type number}, {"AvgLoss14", each List.Average([Loss]), type number}}),
    #"Hinzugefügte benutzerdefinierte Spalte1" = Table.AddColumn(#"Gruppierte Zeilen", "AvgGain", each fnRSI_([AvgGain14], [AvgLoss14], [SYMBOL])),
    #"AvgGain erweitern" = Table.ExpandTableColumn(#"Hinzugefügte benutzerdefinierte Spalte1", "AvgGain", {"Column1", "Column2", "Column3"}, {"AvgGain", "AvgLoss", "DATEA"}),
    #"Entfernte Spalten" = Table.RemoveColumns(#"AvgGain erweitern", {"AvgGain14", "AvgLoss14"})
in
    #"Entfernte Spalten"
Getting all sorts of errors.
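For reference, the recursion those List.Generate steps unroll is Wilder's smoothing: each new average is (previous average * 13 + current gain or loss) / 14, seeded with the simple average of the first 14 periods. A minimal Python sketch of that logic (the function name and price series are illustrative only, not part of the workbook):

```python
def wilder_rsi(prices, period=14):
    """Return the RSI value for the last price in `prices`."""
    if len(prices) <= period:
        raise ValueError("need more prices than the smoothing period")
    # Per-period gains and losses (losses kept as positive numbers).
    deltas = [b - a for a, b in zip(prices, prices[1:])]
    gains = [max(d, 0.0) for d in deltas]
    losses = [max(-d, 0.0) for d in deltas]
    # Seed with the simple average of the first `period` values
    # (the AvgGain14 / AvgLoss14 grouping step in the M query).
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    # Wilder smoothing: new_avg = (prev_avg * (period - 1) + value) / period
    # -- this is the recursion that List.Generate implements.
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window => RSI is pegged at 100
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

Once the smoothed AvgGain/AvgLoss columns exist in the model, the final RSI = 100 - 100/(1+RS) step is a plain row-level calculation.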

Similar Messages

  • Related Content on Formula Cells

    I am trying to create a report that has related content. I want to apply the related content to formula cells but that functionality is disabled on the formula cells in FR Studio. Is there a configuration or property that must be changed to get this to work?
    Thanks
    Ronnie

    Which version are you using?
11.1.2.0 added the ability to create Related Content links in any part of a report - such as formula and text rows and columns, row and column headers, images, text boxes, and even all of the cells in a grid.
    Regards, Iain

  • Extracting a distinct list of values using the Index formula

    In Xcelsius, I am trying to retrieve a distinct list of values from data imported using the Reporting Services button.
    In the spreadsheet I am using the following formula:
    =INDEX($A2:$A831,MATCH(0,COUNTIF($B$2:B2,$A2:$A831),0))
    The above formula works correctly in Xcelsius, but when I select the preview button the values change to #N/A.
    Please could you advise on why this does not work?
    Many thanks,
    Natalie

    Hi Natalie,
    First, you have to be aware of the fact that Xcelsius "simulates" an Excel function. When you are in design mode, the actual "Excel" (MS code) functions are executed. But when you are in preview mode (or export to a swf), all Excel functions are simulated (Xcelsius code).
    The fact that your function works in design mode but not in preview/export may point to a bug.
    But there are also certain assumptions (to address speed/efficiency) on the Xcelsius code which may cause the preview to fail. One such assumption is that on the VLOOKUP function, Xcelsius does not recalculate formulas in the index column of the VLOOKUP table array - if the index column contains formulas, the index column will always remain in the initial state (will not dynamically recalculate).
Also, not all features of a supported Excel function work. For example, array formulas are not supported.
    Bobby
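As an aside, the INDEX/MATCH/COUNTIF array formula above is one way of building an order-preserving distinct list. The same logic, sketched in Python with made-up sample data (illustrative only; Xcelsius itself cannot run this):

```python
def distinct_in_order(values):
    """Return the distinct values in first-seen order, like the
    INDEX/MATCH(0, COUNTIF(...)) array-formula pattern."""
    seen = set()
    out = []
    for v in values:
        if v not in seen:   # plays the role of COUNTIF(...) = 0
            seen.add(v)
            out.append(v)
    return out
```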

  • A question related to index

    Hi,
    I have the following question about indexes:
In our data model, we have several columns that are defined as nullable - they contain null data. And yet these columns have indexes defined on them. Does that help at all? The indexes are single-column indexes, not composite indexes. My understanding is this: if a column contains null values, then Oracle can't use an index on it, so the index would serve no purpose. Is that right?
    Thanks,

"In my situation we have a single-key index on a column, and that column is nullable. So my understanding is that a single-key index on a nullable column does not add any value at all. Is that correct?"
Yes. As http://oradbatips.blogspot.com/2006/11/tip-8-index-for-null-columns.html explains, rows where the column is NULL are not stored in the index, i.e. "the index does not store NULL values". So whenever you run "select * from table where column is null;" it will go for a full table scan, not an index scan, because there are no NULLs in the index.
    Regards
    Girish Sharma

  • Relative References in formulas Hyp Fin Reporting

How do I create a relative row reference with HFR functions, specifically text functions? I know "Cur" or "Current" will give you the relative reference of the current row or column, but I am trying to figure out how to reference the row above. Can I use "Cur - 1" or something similar?
    I am going line-by-line using text functions at the moment:
    Text Row Header
    <<MemberDescription(Current, 1, "Entity")>>
    <<MemberDescription(Current, 2, "Entity")>>
    <<MemberDescription(Current, 3, "Entity")>>
    <<MemberDescription(Current, 4, "Entity")>>
    etc...
    Many thanks in advance

You can use the IsMissing function to achieve the same. Just give it a try; hopefully this should work.
    Regards,
    Rahul

  • Any relation between indexes and high cardinality.............

    HI,
What is the difference between an index and high cardinality?
And also, what is the difference between an index and a line item dimension?
    Thanks

    Hi,
    High Cardinality:
    Please Refer this link, especially the post from PB:
    line item dimension and high cardinality?
    Line Item Dimension:
    Please go through this link from SAP help for line item dimension
    http://help.sap.com/saphelp_nw04/helpdata/en/a7/d50f395fc8cb7fe10000000a11402f/content.htm
    Also in this thread the topic has been discussed
    Re: Line Item Dimension
    BI Index:
There are two types of indexes in BW on Oracle: bitmap and B-tree.
Bitmap indexes are created by default on each dimension column of a fact table.
Setting the high cardinality flag for a dimension usually affects query performance if the dimension is used in a query.
You can change the bitmap index on the fact table dimension column to a B-tree index by setting the high cardinality flag. For this purpose it is not necessary to delete the data from the InfoCube.
    Refer:
    Re: Bitmap vs BTree
    How to create B-Tree and Bitmap index in SAP
    Re: Cardinality
Line Item Dimension
    Hope it helps...
    Cheers,
    Habeeb

  • Indexing AGAIN?!

    I have several questions about my Time Capsule.
    Why does my backup file have to be indexed SO often? It seems like every other week it embarks upon an epic "96 hour remaining" index. The Spotlight Magnifying glass has the pulsing dot in the middle, and when I pull down on it it says, "Indexing Backup of David Benjamin's MBP." I understand indexing, but don't understand why it's doing an entire, massive indexing project for literally the third time since I've bought it. I recently set up a second AEBS on the network for increased strength and range, so now I guess I have a WDS network, but my computer can still find the Time Capsule, so there should be no problem.
    I like indexing, it makes spotlight amazing. However, does my backup have to be indexed? It seems silly.
I would appreciate any advice or help. Thanks!
    David

    I'm certainly curious about this topic.
    I don't know if I can contribute anything more than some observations.
I've never observed indexing of my backups on the Time Capsule. Having said that, I turn off the TC every evening.
    I looked into the system.log and found only 1 instance related to indexing my backup - a failure:
    Jun 6 18:21:46 Homer /System/Library/CoreServices/backupd[1432]: Indexing a file failed. Returned -1120 for: /Library/Caches, /Volumes/Backup of Homer/Backups.backupdb/Homer/2008-06-06-170749.inProgress/4AFE8458-4A75-43E1-8A 4E-F685DF7D83A8/Macintosh HD/Library/Caches
    Jun 6 18:21:46 Homer /System/Library/CoreServices/backupd[1432]: Aborting backup because indexing a file failed.
    If I make a Spotlight search for a file that only exists in the backup and no longer on a hard drive - I get no hits, suggesting that there is no "index" spanning the hard drives and the backups in my setup.
Further, if I enter Time Machine and search for a file in the present (i.e. Now) that only exists in earlier times, I get no hits. That indicates, to me, that there is no indexing spanning the multiple time periods of the backups within Time Machine.
If, however, I navigate to a time period where the file exists, Spotlight will find it. That indicates, perhaps, that the Spotlight index metadata is being backed up.
    As the backups are unmounted at the completion of the hourly backups, Spotlight is not going to find the backup file to index unless either: it just happens to run while the file is mounted; or it is able to "know" about the backup file and force mounting it on its own initiative...

  • Using a byte[] as a secondary index's key within the Collection's API

I am using JE 4.1.7 and its Collections API. Overall I am very satisfied with the ease of using JE within our applications. (I need to know more about maintenance, however!) My problem is that I wanted a secondary index with a byte[] key. The key contains the 16 bytes of an MD5 hash. However, while the code compiles without error, when it runs JE tells me:
    Exception in thread "main" java.lang.IllegalArgumentException: ONE_TO_ONE and MANY_TO_ONE keys must not have an array or Collection type: example.MyRecord.hash
    See test code below. I read the docs again and found that the only "complex" formats that are acceptable are String and BigInteger. For now I am using String instead of byte[] but I would much rather use the smaller byte[]. Is it possible to trick JE into using the byte[]? (Which we know it is using internally.)
    -- Andrew
    package example;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.persist.EntityStore;
    import com.sleepycat.persist.PrimaryIndex;
    import com.sleepycat.persist.SecondaryIndex;
    import com.sleepycat.persist.StoreConfig;
    import com.sleepycat.persist.model.Entity;
    import com.sleepycat.persist.model.PrimaryKey;
    import com.sleepycat.persist.model.Relationship;
    import com.sleepycat.persist.model.SecondaryKey;
    import java.io.File;
@Entity
public class MyRecord {
    @PrimaryKey
    private long id;

    @SecondaryKey(relate = Relationship.ONE_TO_ONE, name = "byHash")
    private byte[] hash;

    public static MyRecord create(long id, byte[] hash) {
        MyRecord r = new MyRecord();
        r.id = id;
        r.hash = hash;
        return r;
    }

    public long getId() {
        return id;
    }

    public byte[] getHash() {
        return hash;
    }

    public static void main(String[] args) throws Exception {
        File directory = new File(args[0]);
        EnvironmentConfig environmentConfig = new EnvironmentConfig();
        environmentConfig.setTransactional(false);
        environmentConfig.setAllowCreate(true);
        environmentConfig.setReadOnly(false);
        StoreConfig storeConfig = new StoreConfig();
        storeConfig.setTransactional(false);
        storeConfig.setAllowCreate(true);
        storeConfig.setReadOnly(false);
        Environment environment = new Environment(directory, environmentConfig);
        EntityStore myRecordEntityStore = new EntityStore(environment, "my-record", storeConfig);
        PrimaryIndex<Long, MyRecord> idToMyRecordIndex = myRecordEntityStore.getPrimaryIndex(Long.class, MyRecord.class);
        SecondaryIndex<byte[], Long, MyRecord> hashToMyRecordIndex = myRecordEntityStore.getSecondaryIndex(idToMyRecordIndex, byte[].class, "byHash");
        // END
    }
}

We have highly variable length data that we wish to use as keys. To avoid massive index sizes and slow key lookups we are using MD5 hashes (or something more collision resistant should we need it). (Note that I am making assumptions about key size and its relation to index size that may well be inaccurate.)
Thanks for explaining, that makes sense. It would be the whole field. (I did consider using my own key data design, using the @Persistent and @KeyField annotations to place the MD5 hash into two longs. I abandoned that effort because I assumed (again) that lookup with a custom key design would be slower than the built-in String key implementation.)
A composite key class with several long or int fields will not be slower than a single String field, and will probably result in a smaller key since the UTF-8 encoding is avoided. Since the byte array is fixed size (I didn't realize that earlier), this is the best approach.
--mark
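The composite-key idea mark describes amounts to splitting the fixed 16-byte MD5 digest into two 64-bit longs (the values a @Persistent key class with two long @KeyFields would hold). A language-neutral sketch of that split in Python (function names are illustrative only):

```python
import hashlib
import struct

def md5_as_two_longs(data: bytes):
    """Hash `data` with MD5 and split the 16-byte digest into
    two big-endian signed 64-bit integers."""
    digest = hashlib.md5(data).digest()      # always exactly 16 bytes
    hi, lo = struct.unpack(">qq", digest)    # two signed longs
    return hi, lo

def two_longs_as_digest(hi: int, lo: int) -> bytes:
    """Inverse operation: reassemble the original 16-byte digest."""
    return struct.pack(">qq", hi, lo)
```

The split is lossless, so equality (and hence index lookup) on the pair of longs is equivalent to equality on the original byte[] key.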

  • Beginners guide to PowerPivot data models

    Hi,
I've been using PowerPivot for a little while now but have finally given in to the fact that my lack of knowledge about data modelling is causing me all kinds of problems.
I'm looking for recommendations on where I should start learning about data modelling for PowerPivot (and other software, e.g. Tableau, Chartio etc.). By data modelling I mean how I should best organise all the data that I want to analyse, which is coming from multiple sources. In my case my primary sources right now are:
Our main MySQL database
Google Analytics data
Google AdWords data
MailChimp data
Various Excel files
I have bought two books - "Dax Formulas for PowerPivot", which is great but sparse on data modelling information, and "Microsoft Excel 2013 - Building Data Models with PowerPivot", which looks excellent but starts off at, I believe, too advanced a level.
Where should a beginner with no experience of data modelling, but intermediate/advanced experience of Excel, go to learn skills for PowerPivot data modelling?
By far the main issue is that our MySQL databases are expansive and include hundreds of tables across multiple databases, and we need to be able to utilise data from all of them. I imagine that I somehow need to come up with an intermediary layer between the databases and PowerPivot which extracts and flattens the main data into fewer, more important tables, but I would have no idea how to do this.
Also, to be clear, I am not looking at ways of modelling the MySQL database itself - our developers are happy with the database relationships etc.; it's just the modelling of that data within PowerPivot and how best to import that data.
Recommendations would be absolutely brilliant; it's a fantastic product but right now I'm struggling to make the most of it.

Thanks for the recommendations. I am aware of the last two of those, and http://www.powerpivotpro.com/ in particular has proved very useful (TechNet less so).
I will take a look at SQLBI in more detail, but from a very casual browse it seems like this too is targeted more at experienced users. Their paid courses may definitely prove useful though.
I think what I'm getting at is that there are probably an increasing number of people like myself who have fallen into PowerPivot without a traditional background in databases and data modelling. In my case I have a small business of 15 employees, and we were using Excel and PivotTables to do some basic analysis before soon discovering that our data was too complicated and that I needed something more. PowerPivot definitely seems to solve that issue and I'm having much better success now than I was without it. I also feel quite competent with DAX and actually building tables from the PowerPivot data model.
What I'm lacking is the very first step: cleaning and preparing raw data for import, then importing it into PowerPivot and setting up an efficient model. I have to be honest that your links above did bring Power Query to my attention, and it seems like a brilliant tool and one of the missing links. I would however still like to see a beginners' guide to data import and model set-up, as I don't think I've yet come across one, either in book or online form, which explains the fundamentals well.

  • Secondary index not getting picked

    Hello All,
I am seeing strange behaviour in the picking of secondary indexes.
Example:
Index I1 has two fields; the same two fields are given in the WHERE clause of the SELECT, and these fields are unique and not used in any other secondary index.
Result in the trace (ST05): index I1 is not picked, and the extraction went for a full table scan.
But in another system, for the same inputs, index I1 is picked, as can be seen in the trace (ST05).
Before posting this query, I have gone through many posts related to secondary indexes and not found them helpful.
    Any inputs will be appreciated.
    Thanks.
    Adnan.

    Hi,
In the SELECT query, have you specified the secondary index fields in the WHERE clause?
Please try with this option; it will surely work.
    More Information About Index:
    Inclusions Index: The index is only created automatically during activation for the database systems specified in the list. The index is not created on the database for the other database systems.
    Exclusions Index: The index is not created automatically on the database during activation for the specified database systems. The index is automatically created on the database for the other database systems.
    Thanks & Regards,
    Saravanan Sambandam

  • Bug in 10.2.0.5 for query using index on trunc(date)

    Hi,
    We recently upgraded from 10.2.0.3 to 10.2.0.5 (enterprise edition, 64bit linux). This resulted in some strange behaviour, which I think could be the result of a bug.
    In 10.2.0.5, after running the script below the final select statement will give different results for TRUNC(b) and TRUNC (b + 0). Running the same script in 10.2.0.3 the select statement returns correct results.
    BTW: it is related to index usage, because skipping the "CREATE INDEX" statement leads to correct results for the select.
    Can somebody please confirm this bug?
    Regards,
    Henk Enting
    -- test script:
    DROP TABLE test_table;
    CREATE TABLE test_table(a integer, b date);
    CREATE INDEX test_trunc_ind ON test_table(trunc(b));
    BEGIN
    FOR i IN 1..100 LOOP
    INSERT INTO test_table(a,b) VALUES(i, sysdate -i);
    END LOOP;
    END;
SELECT *
FROM (
SELECT DISTINCT
trunc(b)
, trunc(b + 0)
FROM test_table
);
    Results on 10.2.0.3:
    TRUNC(B) TRUNC(B+0)
    05-08-2010 05-08-2010
    04-08-2010 04-08-2010
    01-08-2010 01-08-2010
    30-07-2010 30-07-2010
    28-07-2010 28-07-2010
    27-07-2010 27-07-2010
    23-07-2010 23-07-2010
    22-07-2010 22-07-2010
    17-07-2010 17-07-2010
    03-07-2010 03-07-2010
    26-06-2010 26-06-2010
    etc.
    Results on 10.2.0.5:
    TRUNC(B) TRUNC(B+0)
    04-05-2010 03-08-2010
    04-05-2010 31-07-2010
    04-05-2010 24-07-2010
    04-05-2010 06-07-2010
    04-05-2010 05-07-2010
    04-05-2010 01-07-2010
    04-05-2010 16-06-2010
    04-05-2010 14-06-2010
    04-05-2010 08-06-2010
    04-05-2010 07-06-2010
    04-05-2010 30-05-2010
    etc.

Thanks for your reply.
    I already looked at the metalink doc. It lists 4 bugs introduced in 10.2.0.5, but none of them seems related to my problem. Did I overlook something?
    Regards,
    Henk

  • Index Tuning Wizard in Enterprise Manager 10g Grid Control

    Hi,
    9i OEM has an Index Tuning Wizard. Does anyone know where is it located in 10g Enterprise Manager?
    Thanks!
    -Ranit.

    10g has an SQL Access Advisor (for tuning related to indexes and Materialized views) that is accessible via the OEM GC Advisor Central Link.
    Look to DB Targets - Select the DB - Select Advisor Central from the Related Links Section at the bottom - Select SQL Access Advisor.
    HTH,
    Tony

  • Where to declare report fileld formula in BI Publisher

    Hi All,
I am new to BI Publisher. I have a question related to defining a formula. I have to calculate a report field like A /A-B in the report. Where can we declare our column formula in BI Publisher? Please help.
    Regards,
    Sonal
    Edited by: Sonal on Dec 19, 2011 6:11 AM

I cannot say, because it depends on your requirements. You need to learn XML and XPath; then you can start creating BIP reports.
If you are using OBIEE, then another approach would be to create those metrics in the RPD or Answers and then pull them into BIP.
If you do not understand what I am saying, then you are far from doing it the right way; better to ask somebody to do it for you
while you learn the whole process.
    regards
    Jorge

  • Bitmap Index Question

I have a star schema database setup with a bunch of definition tables with 2-10 values in each. Most things I read say to use bitmap indexes only in data warehousing, but in the same breath they talk about tables exactly like I have them set up. So my question is: do bitmap indexes ever have a use outside of a data warehouse? We don't do millions of transactions an hour, but it is an asset management front end run using PHP. So the main data tables are getting updated, inserted into, and deleted from during the day. I'd say on average we have about 30 users at any given time performing actions on the tables or pulling reports.
    On side note, but still related to indexes, is it better to have the indexes stored in different tablespace from the tables? If so, what is its effect?
    Thanks in advance.
    Setup:
    Oracle 11g running on Ubuntu 9.10 64bit

    ChaosAD wrote:
I have a star schema database setup with a bunch of definition tables with 2-10 values in each. Most things I read say to use bitmap indexes only in data warehousing, but in the same breath they talk about tables exactly like I have them set up. So my question is: do bitmap indexes ever have a use outside of a data warehouse? We don't do millions of transactions an hour, but it is an asset management front end run using PHP. So the main data tables are getting updated, inserted into, and deleted from during the day. I'd say on average we have about 30 users at any given time performing actions on the tables or pulling reports.
Having a STAR schema design for a transactional processing application seems a bit strange (but what do I know...).
Have you verified/validated that you definitely need bitmap indexes and that B*Tree indexes will not serve the purpose? Just because it is a STAR schema does not necessarily mean one has to have bitmap indexes.
If you expect 30 users (on average) to concurrently modify the data, I believe bitmap indexes are not the right choice, as the DML actions will suffer from contention. Bitmap indexes negatively affect concurrent multiple transactions.
    Rafi has answered your second question.

  • How to know if Index is used by a program or job

    Hi,
I am having this problem of how to check whether a specific index such as /BIC/B0000** is used by a job or program. This is because I am working on the Oracle Cost-Based Optimizer, to be updated on a weekly schedule. But before a change in the schedule is made for the optimizer, while checking through the indexes, there was an index in an UNUSABLE state.
    SQL> select INDEX_NAME,INDEX_TYPE,STATUS from dba_indexes where index_name like '/BIC/B0000881001KE';
    INDEX_NAME                     INDEX_TYPE                  STATUS
    /BIC/B0000881001KE             NORMAL                      UNUSABLE
    So I cannot proceed because I need to know if this index is used by a program. Can you help me on this?

    Hi
Take a look at SAP Note 184905 - Collective note on performance. Several notes related to indexing can be found there.
