Recommendations for a content marketing vendor?

I'm looking for vendor recommendations to help us with persona creation, journey mapping, content audit and gap analysis. I have one promising company I've been chatting with and am looking for 1-2 others to compare. Any fellow Eloquians out there have a recommendation?
Thanks!

I second this! There are a couple of content development discussions that sound like what you're looking for...
Content Mapping Along the Buyer's Journey
According to the Content Marketing Institute, content marketing is defined as the art of communicating with your customers and prospects without selling. In this discussion, you will understand how different types of content are used along the buyer's journey. Using this knowledge, you will document your own buyer's journey stages and begin to map out your brand/product story to align. Finally, you will develop a content map for a single element of your story at a single stage of the buyer's journey, and identify your required content assets and gaps.
Getting Started with Content Development
Over half of B2B and B2C marketers cite producing enough content as one of their top challenges. To overcome this, you can capitalize on your content development strengths and create a solid plan for production. In this discussion, you will identify your organization's current content development competencies. You will then determine how to organize your content needs by level of effort required to produce each type of content. Finally, you will brainstorm how to schedule content development so that you produce a steady stream of content according to a content calendar. You will walk away with a project plan to help you get started developing the content according to this new approach.
Couldn't hurt to give one of these a try. At least they won't cost you anything.
You can request one here.

Similar Messages

  • Strategy/Recommendations for SLD content

    Hi All,
    Let's take the following business case for a project:
    There are 3 non-SAP applications:
    nonSapApplOne_Java
    nonSapApplTwo_DotNet
    nonSapApplThree_CustomLegacy
    There are 2 SAP applications:
    sapApplFour_ECC6_Module_HCM
    sapApplFour_ECC6_Module_FICO
    sapApplFive_R3
    I would like to know the options/recommendations for creating Products/SWCVs and Business Systems, e.g.:
    Should we create a product for each application (for each SAP and non-SAP application)?
    Should we create one product for the non-SAP applications, and one product for the SAP applications?
    Should we create one product for the entire project, and a SWCV for each application (for each non-SAP and SAP application)?
    Or can we represent/separate the applications by namespaces within the same SWCV?
    Should we create a separate SWCV for storing the objects that use multiple applications' design objects, e.g. mapping objects, process integration scenarios, etc.?
    For each SAP module like HCM and FICO, should there be different namespaces when integrating with other applications?
    Do we need to use business system groups for organising SAP and non-SAP business systems?
    These mostly need to be designed/decided at the beginning of the project, as they form the base of the rest of the development.
    Can you please share your comments and recommendations on the above kinds of queries.
    Thanks in advance,
    Madhu.

    Should we create one product for the entire project, and a SWCV for each application (for each non-SAP and SAP application)?
    I am not sure what is meant by a Product! The creation of a SWCV should be related to a business unit that your organization has (at least that is the practice we follow). Suppose your organization has three business units (Sales, Manufacturing, Maintenance); in that case you can create three different SWCVs, each containing BU-specific development. So all the development related to Sales will come under the SWCV for Sales, and so on.
    Or can we represent/separate the applications by namespaces within the same SWCV?
    As implied above, it is not a good practice to develop namespaces based on the applications; namespaces should be related to the interface name. Say you have an interface for sales order transfer from system X to Y: then your namespace should reflect 'salesorder' in it, and if you have multiple sales order processes you can append some terms to the namespace to help differentiate between the individual SO processes.
    Should we create a separate SWCV for storing the objects that use multiple applications' design objects, e.g. mapping objects, process integration scenarios, etc.?
    If you mean object reuse... then you can define a common namespace within a SWCV, develop the commonly used objects in it, and then refer to these in other flows (namespaces) defined within the SWCV.
    For each SAP module like HCM and FICO, should there be different namespaces when integrating with other applications?
    Suppose you have 5 different integrations involving HCM ---> FICO (or another combination); then your namespaces should be defined in such a way that there is a clear differentiation between the 5 integrations.
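    Purely as an illustration of that naming idea, a hypothetical scheme for five HCM ---> FICO integrations might look like the following (the process names and URL prefix are invented for this sketch, not taken from SAP guidance); written as a small Python snippet just to keep the list self-documenting:
    # Hypothetical interface-oriented namespaces for 5 HCM -> FICO integrations.
    # Each namespace names the process, not the application, and the last
    # path segment differentiates similar processes.
    namespaces = {
        "payroll_posting":  "http://example.com/xi/hcm_fico/payrollposting",
        "travel_expense":   "http://example.com/xi/hcm_fico/travelexpense",
        "cost_center_sync": "http://example.com/xi/hcm_fico/costcentersync",
        "headcount_report": "http://example.com/xi/hcm_fico/headcountreport",
        "salary_accrual":   "http://example.com/xi/hcm_fico/salaryaccrual",
    }
    for process, ns in namespaces.items():
        print(f"{process}: {ns}")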
    For more information on how your objects should be defined you can go through this document:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/90b213c2-d311-2a10-89bf-956dbb63aa7f
    All the above points are my personal opinion; the implementation should be in accordance with the best practices suggested by SAP and, secondly, with your organizational needs.
    Regards,
    Abhishek.

  • Netbook recommendations for the US market?

    Hi there!
    I'm about to buy a netbook. Since my parents will be flying to the US soon, I thought about asking them to buy it there and bring it back to Germany, as this seems a lot cheaper (is it?). So I'm wondering if there are any netbooks on the US market that someone can recommend. I'm not looking for anything special: anti-glare display (roughly 10"), hopefully Linux compatibility. So, can anyone suggest a model?
    Best wishes,
    Rufus

    RufusD wrote: What do you guys know about these anti-glare stickers one can glue to the screen? Are they any good?
    No.
    Anything you put in the optical path is going to reduce the luminance by 10 or 20 cd/m^2 [nit]. Furthermore, you are almost guaranteed to trap dust or air bubbles in the optical path.
    hayer wrote: Why oh why do they keep making glossy screens?
    Because it is all about contrast.  That is the ratio of light emitted by the display divided by the light reflected off of the display.
    Matte finishes have a very high diffuse reflection -- reflected light  bounces off in all directions -- some of which is guaranteed to be in the direction of your eye.   For each light source behind you, some portion of that light is reflected in the direction of your eye.  As such, bright environments uniformly lower the display contrast.
    Gloss finishes have a very low diffuse reflection, but a very high specular reflection (like a mirror). Light sources bounce off of the display with almost the same brightness with which they hit it. As long as the reflection does not hit you in the eye, the display contrast is not impacted. If, on the other hand, the reflection bounces back at the user, the contrast at that point is trashed. Glossy displays have a non-uniform reduction in contrast.
    The point is a matte display works great if there are many low intensity light sources behind you.  The contrast is uniform and there is a uniform reduction in that contrast.  If, however, there is a single bright source behind you (the sun), a matte display will be reduced to a very low contrast.  In this situation, a glossy display can be positioned such that the specular reflection from the sun does not reflect back toward the user. 
    In an office environment, LCDs generally provide contrast ratios on the order of several hundred to one. LCDs are okay down to about 10:1; below 3:1 they are useless (IMHO). A matte display with a luminance of 300 nit will, outdoors on a sunny, sub-tropical day, quickly fall below 5:1 no matter how it is oriented. With gloss, you can at least try different angles.
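    To put rough numbers on that reasoning, here is a minimal Python sketch. The reflectance and ambient-light values are assumed for illustration, and it uses the standard Lambertian approximation for a matte surface: reflected luminance = reflectance x illuminance / pi.
    # Approximate contrast of a display under ambient light, modelling a
    # matte (Lambertian) surface. Values below are assumptions, not measurements.
    from math import pi

    def contrast(display_nit, ambient_lux, diffuse_reflectance):
        reflected = diffuse_reflectance * ambient_lux / pi  # cd/m^2 (nit)
        return (display_nit + reflected) / reflected

    print(round(contrast(300, 500, 0.02), 1))      # office, ~500 lux: roughly 95:1
    print(round(contrast(300, 100_000, 0.02), 1))  # direct sun: roughly 1.5:1
    These assumed numbers are consistent with the sub-5:1 outdoor figure above.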

  • SAP Parameter Recommended  note for SAP Content Server 6.40

    Hi..
    We are planning an installation of SAP Content Server 6.40 with SAP MaxDB 7.6 on Solaris. Kindly advise which recommendations need to be applied before installation of the production system,
    like IO Buffer Cache [MB], Number of Sessions, or any SAP Note with recommended parameters for SAP Content Server 6.40 with SAP MaxDB 7.6.
    Regards,
    Panu

    Hello,
    Did you already check the preparations and parameters in the SAP CS 6.40 installation guide?
    You can find topics in the Installation Guide like :
      - Planning and Sizing of the Database Instance
      - Preparations
    For more in-depth tuning, also have a look at the following document:
    Operational Guide - SAP Content Server
    This document contains the complete list of Content Server parameters.
    Success.
    Wim

  • Is RAID 5 not recommended for Oracle database ?

    - I am planning to install Oracle database
    Here are the specs:
    - Dell PowerEdge T610 server
    - Windows 2008 R2 (64-bit )
    - 24 GB memory
    - Oracle 10g database (64-bit database)
    - Our database is for an ERP based client server application
    - Will have around 35-45 users
    - Will have around 150-200 transactions per day (8AM-5PM), database only
    - The database would be around 10-15 GB (data files)
    Now that I am planning to buy the server hardware, I have heard that RAID 5 is not recommended for Oracle databases.
    Is it true?
    What do you recommend for an Oracle database: RAID 5 or RAID 1?

    johnpau2013 wrote:
    I heard that RAID 5 is not recommended for Oracle database. Is it true?
    Kind of... Oracle (per the RAC Starter Kits, for example) explicitly recommends SAME - as in Stripe And Mirror Everything.
    This is a combination of RAID1 and RAID0.
    However, you will also get whitepapers from storage vendors like EMC, in partnership with Oracle, that explain how to configure and use RAID5 storage and how effective it is. And as some will tell you, RAID5 can work just fine with Oracle... and others will grimace and tell you how bad an idea that is from personal experience.
    In a nutshell - RAID5 requires a parity calculation on each write. That is a very expensive overhead if that calculation impacts the fwrite() (file write) command of a process writing data to disk. It significantly slows down the write operation.
    However, if that parity calculation is done asynchronously and does not affect the elapsed time of an fwrite(), then an fwrite() to RAID5 is as fast or as slow as it would be to RAID10, for example.
    I'm unsure how many modern storage systems support off-loading the parity calculation so that it does not impact the write I/O call. This can be done using ASICs (Application-Specific Integrated Circuits), specialised s/w on the storage server, etc.
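    To make the parity point concrete, here is a minimal sketch of the arithmetic (illustrative only, not any vendor's implementation): parity is the XOR of the stripe's data blocks, and a small write must first read the old data and old parity to compute the new parity - the classic read-modify-write penalty.
    from functools import reduce

    def stripe_parity(blocks):
        # Parity block = XOR of all data blocks in the stripe.
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

    def small_write_parity(old_data, new_data, old_parity):
        # Partial-stripe write: P_new = P_old XOR D_old XOR D_new,
        # which costs two extra reads (D_old, P_old) per small write.
        return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

    data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
    parity = stripe_parity(data)
    updated = small_write_parity(data[0], b"\xff\x00", parity)
    assert updated == stripe_parity([b"\xff\x00", data[1], data[2]])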
    Whatever you put in place - test the RAID config thoroughly up front and ensure that performance is up to spec. There's a utility called fio (Flexible I/O), written by a Linux kernel hacker who works for Oracle. It is an excellent utility for generating and testing I/O - but it is Linux and Unix based.
    Any specific reason for using Windows? Linux is by far the more predominant 64-bit operating system, and has a significantly higher market share for Oracle RDBMS than Windows. Given that over 93% of the 500 fastest computer systems on this planet run Linux, I'm always kind of amazed that some would still use Windows as a server-side o/s for Oracle.

  • How to display the field name in the tabulated view for a content query web part

    I have added a Content Query Web Part, changed the web part file to include custom columns, and imported and referenced ItemStyle.xsl
    to include a tabulated view for the content query.
    However, as it stands only the content is displayed.
    As I am using a tabular view to display the data, I want to display the field names as well.

    Hi,
    According to your description, my understanding is that you need to display the field names in the tabulated view for a Content Query Web Part.
    For your issue, please refer to the code below:
    <!-- Custom CQWP item style. The xsl:if below renders the header row
         (the field names) once, for the first item only; every item then
         renders its own data row. -->
    <xsl:template name="VendorCustomStyle" match="Row[@Style='VendorCustomStyle']" mode="itemstyle">
      <html>
        <table width="100%">
          <!-- Header row: emitted only for the first item -->
          <xsl:if test="count(preceding-sibling::*)=0">
            <tr>
              <td width="8%" valign="top"><div class="item"><b>Vendor ID</b></div></td>
              <td width="12%" valign="top"><div class="item"><b>Vendor Name</b></div></td>
              <td width="50%" valign="top"><div class="item"><b>Vendor Description</b></div></td>
              <td width="10%" valign="top"><div class="item"><b>Vendor Country</b></div></td>
              <td width="10%" valign="top"><div class="item"><b>Vendor Date</b></div></td>
              <td width="10%" valign="top"><div class="item"><b>Created By</b></div></td>
            </tr>
          </xsl:if>
          <!-- Data row for the current item -->
          <tr>
            <td width="8%" valign="top"><div class="item"><xsl:value-of select="@VendorID" /></div></td>
            <td width="12%" valign="top"><div class="item"><xsl:value-of select="@Title" /></div></td>
            <td width="50%" valign="top"><div class="item"><xsl:value-of select="@Vendor_x005F_x0020_Description" disable-output-escaping="yes" /></div></td>
            <td width="10%" valign="top"><div class="item"><xsl:value-of select="@Vendor_x005F_x0020_Country" /></div></td>
            <td width="10%" valign="top"><div class="item"><xsl:value-of select="@Vendor_x005F_x0020_Date" /></div></td>
            <td width="10%" valign="top"><div class="item"><xsl:value-of select="@Author" /></div></td>
          </tr>
        </table>
      </html>
    </xsl:template>
    For more information, please have a look at these blogs:
    http://www.codeproject.com/Articles/756834/Customizing-the-Content-Query-Web-Part-and-Item-St
    http://msdn.microsoft.com/en-us/library/ms497457.aspx
    http://clarksteveb.hubpages.com/hub/Customized-Content-Query-Web-Part-CQWP-in-SharePoint-2007-with-results-Tabbed-Grouped-and-in-an-HTML-Table
    http://blog.sharepointexperience.com/customitemstyle/
    Best Regards,
    Eric
    Eric Tao
    TechNet Community Support

  • A workflow for a new MM vendor invoice showing the error.

    We have detected the following strange behaviour in our workflow runtime environment.
    A workflow for a new MM vendor invoice is started as it should be. When we look at the workflow overview it shows status "In Process".
    If the workflow shows the status "In Process" for an hour or more, a new undesired workflow (same WF task) is started automatically for the same invoice. Nobody has triggered this new undesired workflow.
    Please let us know what the problem is.
    Thanks
    Sarin23

    Hi,
    Please see the links below; they should help you.
    http://help.sap.com/printdocu/core/print46c/en/data/pdf/BCBMTWFMMM/BCBMTWFMMM.pdf
    http://help.sap.com/saphelp_46c/helpdata/EN/c5/e4a930453d11d189430000e829fbbd/content.htm
    Anil

  • Recommendation for maintaining pricing in CRM when R/3 is also in landscape

    Hi,
    In our context we already have SAP R/3 in the landscape; all pricing is done in R/3, and a lot of custom pricing routines are also written in R/3.
    Now we are in the process of implementing CRM and plan to do quotations in CRM and replicate them to SAP R/3. What is the recommendation for such a scenario?
    Do we replicate all pricing conditions/procedures in CRM? Do we need to re-do all procedures in the IPC?
    Can we re-determine pricing in R/3 so that we can avoid such work in CRM? We are using configurable products in R/3.
    please share your thoughts/best practices and any document will be of help.
    thanks
    RH

    Hi Swaroop,
    The assignment of business partner classification from CRM to R/3 is purely based on the classification,
    so your initial entry
    'I chose B as classification & Z001 as an account group which is a copy of 0001. System takes it.'
    was taken by the system.
    Now, when you tried to make another entry with the same classification
    'chose B again & this time Z002 acc gp (copy of 0002), and system does not allow?'
    it checks for the existing classification and so does not allow the duplicate.
    Delete the previous entry and then try entering the new entry;
    the system should then take it.
    Hope that answers your queries...
    Revert back with any doubts.
    Also check this link for details: http://help.sap.com/saphelp_crm50/helpdata/en/04/4d9ac77b2b11d3b52f006094b9114a/content.htm
    Thanks and Regards,
    RK.

  • Using different templates for the same content

    Hi all,
    I have an issue where we have a Page Group with lots of pages/sub-pages.
    There are three different User Groups, Internal, Customer and Supplier.
    I need to display the same content but with different templates (look and feel), one for the Internal, one for the customer and one for the supplier.
    Can this be done using Oracle Portal 10.1.14? If so, how?
    Many thanks.

    If you mean HTML templates, you could use a CSS-based design and include logic in "oracle" tags in your HTML template that chooses the CSS based on group membership. You could use the wwsec_api.is_user_in_group function to check for group membership and output the appropriate CSS as a result.
    If you're talking about three different portal templates, I'd recommend publishing your content as page portlets, and putting those portlets on three different pages, each having a different view privilege.
    Or perhaps some combination of techniques would work.
    I'm not sure what that has to do with having lots of pages/sub-pages. Are you trying to reuse more content and lower the number of pages?
    HTH,
    -John

  • Using Digital Projectors for Widescreen content

    Hi, I have a 15" macBook pro
    and was wondering what digital projectors
    are recommended for displaying widescreen
    content (as in HD content aroudn 1280 x 720) ?
    thank you

    Hi,
    I would like to use an iPod nano for slideshows with a digital projector... wondering if you ever got an answer to this question? I would like to know if I can hook one up to a digital projector before I purchase one, and if so, which cables do I need?
    Thanks,
    Heather

  • XML Parser Message: Element series is not valid for the content model

    Hello,
    I work with FrameMaker 8 and DITA.
    I change the element prodinfo in the topic.edd from:
    General rule: (prodname), (vrmlist), (brand | series | platform | prognum | featnum | component)*
    to:
    General rule: (brand | series | platform | component)*
    When I import the element definition to the template everything is okay.
    When I insert the elements metadata, prodinfo, brand, series, platform and component into a topic I get the XML Parser Message that the element brand is not valid for the content model (prodname,vrmlist, ((brand|series|platform|prognum|featnum|component))*).
    When I delete the element brand in the topic I get the XML Parser Message that the element series is not valid for the content model (prodname,vrmlist, ((brand|series|platform|prognum|featnum|component))*).
    I change the element prodinfo in the topic.edd to:
    General rule: (brand)?, (series)?, (platform)?, (component)?
    ...and get the same Parser Message.
    I do not understand that. Isn't it allowed to change the EDD this way without changing the DTD?
    With kind regards
    Nina

    Hi Nina...
    In general, the EDD and DTD need to sync up. You can remove elements from an EDD element definition's general rule, as long as the resulting elements are still valid to the DTD. But if changing a general rule creates an invalid structure, you'll need to also change the DTD to allow the revised structure.
    With DITA, it is common to remove inline elements from block-level elements. For example, you might want to remove the <msgblock>, <msgnum>, and <msgph> elements from the general rule of the <p> element .. this can be done easily in the EDD and the resulting structure remains valid with the DTD.
    However, what you're doing leaves the <brand> element as a child of <prodinfo> .. which is invalid. You'll get these errors when saving a file, since this is when the file is validated against the DTD.
    I do not recommend modifying the structure in such a way that requires you to modify the DTD. If you really need to do this, then you should consider making a specialization to support your revised model.
    I hope this helps.
    Cheers and Happy New Year!
    ...scott

  • Which jre & tomcat version do you recommend for this hardware setup?

    I have to install a SINGLE JSP application onto an Intel Pentium 3 Windows XP box with 128 MB of RAM, clocked at 733 MHz.
    1. Is this setup sufficient for running tomcat?
    2. Which version of tomcat and which JRE do you recommend for this setup?
    3. Would I get better performance running Jetty, and if so which version?

    The issue of changing native folder hierarchy and/or naming conventions has little to do with solid state media and everything to do with the container and header data that links the audio/video streams with the metadata and thus allows the content or "media essence" to be properly decoded and played. This is the very reason to use a tool like Prelude or Premiere's Media Browser, or even Bridge or Lightroom, or a third-party logging tool like MetaFuze or ShotPut Pro, or dedicated camera-format software such as Sony XDCAM Clip Browser or Panasonic P2 browser, REDCINE-X, etc. These tools are all designed to parse the captured A/V data and make properly syntaxed edits to the metadata, either embedded directly in the media container or, in Prelude and Premiere's case, written back to a sidecar or associated XMP file, which is a flavor of XML that contains all the originally captured metadata plus any changes and/or additions made via these tools.
    The dangers come into play when changes to the native file naming convention or folder hierarchy are made directly to the media content via Finder on Mac or Explorer on Windows; changing things at that level can break the relationship between the metadata stream and the media essence, since the changes are not written to the metadata or content headers or handled in a manner referred to as transmuxing.
    Prelude and Premiere's Media Browser both allow you to look into the native files non-destructively and safely modify the metadata linked to the native media, or to transcode, which copies or recreates a new media file in whatever format is selected, along with a new metadata stream.
    Prelude is a nice utility, and Premiere's Media Browser is an amazing tool for organizing, modifying, and selectively ingesting media and/or making quick selects and assemblies.

  • Recommendations for Analog to Digital conversion

    I am seeking the help of all the pros out there. I am moving my extensive analog music collection (LP/cassette) to the digital world. Are there any recommendations for achieving a CD-quality sound/volume level? I am currently using the standards (click/pop/hiss reduction, compressor and/or hard limiting). I am recording at 44.1 kHz/16-bit and saving files as mp3PRO at 320 kbps for the sake of storage. I welcome any assistance in regard to plug-ins, equalization or other suggestions.
    Thank you in advance
    Paddy 41
    Pentium 4/3.2 GHz, 2.0 GB RAM, Sound Blaster X-Fi Elite Pro, Windows XP Pro SP3, Adobe Audition 3.0, Sony PS-LX250H Turntable, Denon DRM-600 Cassette.

    >I am recording at 44.1/16 Bit and saving files in mp3 PRO-320 kbps for sake of storage. I welcome any assistance in regard to Plug-ins, equalization or other suggestions.
    You shouldn't save files as MP3s until all the processing you want to do has been done - this is a lossy format, and every time you open a file saved like this it gets re-decoded, and then re-encoded when you save, and the quality degradation is progressive. Unfortunately, recording your files at 16-bit is also not quite the thing to do if you are going to do any sort of amplitude processing, especially if you are recording at a lower-than-optimum amplitude, which is usually the case. 44.1k is fine, though!
    So, to get over the potential problems, the thing to do is to digitise your files as 44.1k 32-bit floating point, and until all processing is done, store them as Windows PCM wav files in this format, because that is uncompressed, which is what you need.
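    As a rough illustration of why 16-bit capture plus later gain changes hurts (a synthetic tone and assumed levels, sketched with numpy; the numbers are illustrative, not from this thread):
    import numpy as np

    t = np.linspace(0, 1, 44100, endpoint=False)
    quiet = 0.01 * np.sin(2 * np.pi * 440 * t)   # captured ~40 dB below full scale

    as_int16 = np.round(quiet * 32767).astype(np.int16)  # 16-bit capture quantises hard
    restored = as_int16.astype(np.float32) / 32767

    gain = 100.0  # later amplitude processing amplifies the quantisation error too
    err = np.max(np.abs(gain * restored - gain * quiet))
    print(f"worst-case error after gain on the 16-bit path: {err:.4f} of full scale")
    # A 32-bit float capture keeps its full mantissa at low levels, so the
    # corresponding error there is negligible.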
    If you want to reduce noise on cassette recordings, it's worth using a fairly high FFT setting, even though it takes longer to do the processing. Since most of the noise is not LF, this tends to work better, although there is also a case for doing the NR twice, once with a lower FFT setting and once with a higher one, but not trying to take out too much in one go.
    As for click reduction - well, you have to experiment. But letting the software determine all of the levels tends to give you a pretty good starting point. Another trick that's sometimes worth it is to transform your files temporarily to M-S stereo instead of L-R, and treat each channel separately - you generally get different levels of clicks in each, usually more in the S signal, whilst the M cancels out quite a few. And in general, the fewer clicks you have to process, the better.
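    For reference, the M-S transform mentioned above is just sums and differences of the two channels; a quick sketch (numpy, illustrative values):
    import numpy as np

    def lr_to_ms(left, right):
        # Mid carries what the channels share; side carries the differences,
        # which is where single-channel clicks concentrate.
        return (left + right) / 2.0, (left - right) / 2.0

    def ms_to_lr(mid, side):
        return mid + side, mid - side  # exact inverse

    left = np.array([0.2, 0.5, -0.1])
    right = np.array([0.2, 0.1, -0.1])
    mid, side = lr_to_ms(left, right)
    l2, r2 = ms_to_lr(mid, side)
    assert np.allclose(l2, left) and np.allclose(r2, right)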
    Hard limiting? I've never used it on a conversion like that at all, and wouldn't even consider it - transients are already likely to be distorted, and hard limiting them further seems like a very strange thing to do...
    Other tools that are useful, especially on cassettes, are iZotope's Ozone (some of which is already in Audition) and HarBal. With the latter you can restore distorted frequency responses, usually getting a good average value automatically and easily. And judicious use of Ozone's enhancer can make the results seem a lot less like they came from a cassette...
    One of the useful things you can do with Ozone is to selectively widen parts of the spectrum. Since most records of music tend (for reasons of tracking) to have had the bass forced to be virtually mono, with subsequent similar consequences for other instruments with substantial LF content, it's worth expanding just about everything under about 250Hz, possibly by more than you might think.
    Generally, 'normal' EQ doesn't have a lot of use when processing vinyl, if it's relatively recent. And that generally means anything from the 60's onwards. If you think things are seriously wrong, the chances are that it's your monitors and environment that are misleading you. Just bear in mind that when this stuff was produced, it was all monitored in a professional environment - more so than it might be today, even.
    No doubt there will be a few other responses - I don't think that the foregoing is anything like complete, but is at least a start.

  • I need a recommendation for SAN (Actually NAS) ethernet switches.

    Hi, we have one Dell blade chassis with 8 servers, connected to stacked switches (3750s) with EtherChannels, connected to a NetApp (Controllers 1 and 2 respectively). Unfortunately, the Dell blade servers only support 1 Gbps per port and the NetApp also supports 1 Gbps per port (all ports are Ethernet, not FC).
    Questions
    1) What kind of switch do you recommend? I know that it is not popular to use Ethernet in a SAN, but I just realized that we have NAS, not SAN, so what switches do you recommend for NAS?
    Any MDS 9000 series won't work, because it supports only limited Ethernet ports (and many FC ports). I need many Ethernet ports, at least 48 per switch. I know the Nexus is one of the candidates, but I want to double-check.
    2) Most vendors ask what kind of connection I have. I usually say just Ethernet, not FC, not FCoE. Is that right? I believe that since we have NAS, it is "Ethernet".
    3) The reason I am looking for different switches is that there are huge packet drops (outbound) on the switches (outgoing toward both the Dell servers and the NetApp; incoming is OK). We recently moved P-to-V, so it is possible that the traffic volume increased. But the output below doesn't give me a lot of information. Do you have any recommendations for further troubleshooting?
     (connected interface to Dell)
    GigabitEthernet2/0/34 is up, line protocol is up (connected) 
      Hardware is Gigabit Ethernet, address is 580a.20f1.db22 (bia 580a.20f1.db22)
      Description: Chassis8
      MTU 9000 bytes, BW 1000000 Kbit/sec, DLY 10 usec, 
         reliability 255/255, txload 1/255, rxload 1/255
      Encapsulation ARPA, loopback not set
      Keepalive set (10 sec)
      Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
      input flow-control is off, output flow-control is unsupported 
      ARP type: ARPA, ARP Timeout 04:00:00
      Last input never, output 00:00:57, output hang never
      Last clearing of "show interface" counters never
      Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 32453861
      Queueing strategy: fifo
      Output queue: 0/40 (size/max)
      5 minute input rate 3853000 bits/sec, 364 packets/sec
      5 minute output rate 2275000 bits/sec, 368 packets/sec
         15864561667 packets input, 16858567695886 bytes, 0 no buffer
         Received 4347 broadcasts (6 multicasts)
         0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
         0 watchdog, 6 multicast, 0 pause input
         0 input packets with dribble condition detected
         47326292220 packets output, 62942914503089 bytes, 0 underruns
         0 output errors, 0 collisions, 1 interface resets
         0 unknown protocol drops
         0 babbles, 0 late collision, 0 deferred
         0 lost carrier, 0 no carrier, 0 pause output
         0 output buffer failures
    (connected to NetApp)
    Port-channel2 is up, line protocol is up (connected) 
      Hardware is EtherChannel, address is 2c3e.cfaa.af03 (bia 2c3e.cfaa.af03)
      MTU 9000 bytes, BW 3000000 Kbit/sec, DLY 10 usec, 
         reliability 255/255, txload 2/255, rxload 1/255
      Encapsulation ARPA, loopback not set
      Keepalive set (10 sec)
      Full-duplex, 1000Mb/s, link type is auto, media type is unknown
      input flow-control is off, output flow-control is unsupported 
      Members in this channel: Gi1/0/3 Gi2/0/1 Gi2/0/2 
      ARP type: ARPA, ARP Timeout 04:00:00
      Last input never, output 00:00:01, output hang never
      Last clearing of "show interface" counters never
      Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 3316452
      Queueing strategy: fifo
      Output queue: 0/40 (size/max)
      5 minute input rate 6693000 bits/sec, 2048 packets/sec
      5 minute output rate 30028000 bits/sec, 2773 packets/sec
         107334585357 packets input, 140120529103340 bytes, 0 no buffer
         Received 5609191 broadcasts (407218 multicasts)
         0 runts, 0 giants, 0 throttles
         0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
         0 watchdog, 407218 multicast, 0 pause input
         0 input packets with dribble condition detected
         38961062194 packets output, 40437523739199 bytes, 0 underruns
         0 output errors, 0 collisions, 2 interface resets
         0 unknown protocol drops
         0 babbles, 0 late collision, 0 deferred
         0 lost carrier, 0 no carrier, 0 pause output
         0 output buffer failures, 0 output buffers swapped out
    Thanks. 
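    As a quick sanity check on the counters quoted above (plain arithmetic, not a diagnosis):
    gi_drops, gi_out = 32_453_861, 47_326_292_220   # GigabitEthernet2/0/34
    po_drops, po_out = 3_316_452, 38_961_062_194    # Port-channel2
    print(f"Gi2/0/34 output drop rate:      {100 * gi_drops / gi_out:.3f}%")   # ~0.069%
    print(f"Port-channel2 output drop rate: {100 * po_drops / po_out:.4f}%")   # ~0.0085%
    # Rates this low on a 3750 often point at momentary bursts overflowing its
    # small egress buffers rather than a sustained bandwidth shortfall, which is
    # worth confirming (e.g. with per-queue drop counters) before replacing hardware.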

  • Best Practices for using Content Types

    We had a third-party vendor who migrated and restructured contents and sites in SharePoint 2010. I noticed one unusual thing: in most cases they created separate site content types for every library in a site, i.e. even if two libraries contain the same set of metadata columns, they created a separate site content type by duplicating it from the first one, gave it a unique name, and used it in the second library.
    My view of content types is that they are meant for reusability, i.e. if another library needs the same set of metadata columns then I would just reuse the existing content type, rather than creating another content type with a different name by inheriting it from the first one with the same set of columns.
    When I asked the vendor the reason for this approach (for every library they created new content types, and for libraries needing the same set of metadata columns they just inherited from a custom site content type and created another duplicate with the same set of metadata columns, giving it a different name, in most cases the name of the library), they said they did it to classify documents, which I did not agree with, because by creating two document libraries the classification is already done.
    I need some expert advice on this, and I will really appreciate it. I understand content types are useful and provide reusability, but:
    A) Do we need to create new site content types whenever we create a new library? (Even though we are not going to reuse them.)
    B) What is best practice if a few libraries need the same set of metadata columns:
    1) Create a site content type and reuse it in those libraries? or
    2) Create a site content type and create new content types by inheriting from it, just giving them different names, even though all of them have the same set of columns?
    I need expert advice on this, but the following is my own opinion: I do not think A) is a good practice; we should create a site content type only when we think it will be reused, and we do not need to create one every time we create a document library. I also do not think option 2) of B) is a good practice.
    Dhaval Raval

    It depends on the nature of the content types and the libraries. If the document types really are shared between document libraries, then use the same ones. If the content types are distinct, non-overlapping items that have different processes, rules, or uses, then breaking them out into separate content types is the way forward.
    As an example of sharing content types: Teams A and B have different document libraries. Both fill in purchase orders, although they work on different projects. In this case they use the same form, and sharing a content type is the no-question approach.
    As an example of different content types: a company has two arms, a consultancy where they send people out to client sites, and a manufacturing team who build hardware. Both need to fill in timesheets, but whilst the metadata fields on both are the same, the forms are different and/or are processed in a different manner.
    You can make a case either way; I prefer to keep the content types simple and only expand out when there's a proven need and a user base with experience with them. It means that if you wanted to subdivide later you'd have more of a headache, but that's a risk I generally think works out.

Maybe you are looking for

  • I need help getting my Intuos5 to work with Flash CS6

    Can someone give me some help with this? I can't find a single thing on Google that helps me. I installed Adobe Flash CS6 recently and I have been using my Intuos5 Large for a long time now with Photoshop CS6. The thing is, with Photoshop, it took fo

  • Does running scanpst ever fully correct or fix the pst? Ran it 416 times so far trying to get it to fully clean the file.

    Does running scanpst ever fully correct or fix the pst? Ran it 416 times so far trying to get it to fully clean the file. I've run this scanpst within Win7 Outlook 2010 for 3 days. The scanpst began taking about 4 minutes to finish and still showed t

  • Calendar Server Server Check

    Good Day, I have installed Sun Java System Calendar Server on my machine; however, after that I tried to start the server via the 'Start Server' icon. I don't know what's next after that step. 1. May I know what URL do I have to type in the browser to

  • Feedback. Good, but not so much

    Hello, I recently bought a PlayBook and I updated the device to the latest OS. I would like to share my impressions and feedback. First, the hardware looks pretty good: right size, right handling, the OS is really smooth. The OS is really good as w

  • How do you disable postscript pass-through using Mac OSX 10.6

    My Phaser 4600DN prints quite slowly when printing a PDF; a 28-page document takes nearly 7 minutes to print. On a Windows 7 PC the same document takes 6 minutes to print. If I disable the PostScript Pass-Through option on the driver the 28 page docu