I get quite different results using clang++ -O3 -std=c++11 -stdlib=libc++.
First I extracted a text file of roughly 470K words, separated by commas with no newlines, into one huge string, like this:
// Read the entire comma-separated file into one pre-sized string.
path const inputPath("input.txt");
filebuf buf;
buf.open(inputPath.string(), ios::in);
if (!buf.is_open())
    return cerr << "can't open" << endl, 1;
string str(filesystem::file_size(inputPath), '\0');
buf.sgetn(&str[0], str.size());
buf.close();
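The snippets here leave out the headers and using-directives; one plausible set, inferred from the unqualified names used rather than taken from the original post, would be:

#include <algorithm>   // std::copy
#include <chrono>
#include <fstream>     // filebuf
#include <iomanip>     // setw
#include <iostream>
#include <iterator>    // back_inserter
#include <ratio>
#include <stdexcept>   // runtime_error
#include <string>
#include <vector>
#include <boost/algorithm/string.hpp>      // split, is_any_of, is_from_range, token_finder, make_split_iterator
#include <boost/filesystem.hpp>            // path, file_size
#include <boost/range/iterator_range.hpp>  // copy_range
#include <boost/tokenizer.hpp>
using namespace std;
using namespace boost::algorithm;          // split, is_any_of, is_from_range, token_finder, make_split_iterator
using boost::copy_range;
namespace filesystem = boost::filesystem;  // so filesystem::file_size(...) resolves
using filesystem::path;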
Then I ran various timed tests, storing the results into a pre-sized vector that is cleared between runs, for example:
void vectorStorage(string const& str)
{
    static size_t const expectedSize = 471785;
    vector<string> contents;
    contents.reserve(expectedSize + 1);
    ...
    {
        timed _("split is_any_of");
        split(contents, str, is_any_of(","));
    }
    if (expectedSize != contents.size()) throw runtime_error("bad size");
    contents.clear();
    ...
}
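The results below also include a "hand-coded" row that isn't shown above. A hypothetical reconstruction of such a loop, slotted into one of the elided sections of vectorStorage and reusing the same contents vector and timed helper (my own sketch based on string::find, not code from the post), might look like:

{
    timed _("hand-coded");
    size_t pos = 0;
    while (pos < str.size())
    {
        size_t const comma = str.find(',', pos);                       // next delimiter, or npos
        size_t const end = (comma == string::npos) ? str.size() : comma;
        contents.emplace_back(str, pos, end - pos);                    // copy one token
        pos = end + 1;
    }
}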
For reference, the timer is just this:
// RAII timer: records the construction time and prints the elapsed
// milliseconds when it goes out of scope.
struct timed
{
    ~timed()
    {
        auto duration = chrono::duration_cast<chrono::duration<double, ratio<1, 1000>>>(
            chrono::high_resolution_clock::now() - start_);
        cout << setw(40) << right << name_ << ": " << duration.count() << " ms" << endl;
    }

    timed(std::string name = "") :
        name_(name)
    {}

    chrono::high_resolution_clock::time_point const start_ = chrono::high_resolution_clock::now();
    string const name_;
};
I also timed a single iteration (no vector). Here are the results:
Vector:
hand-coded: 54.8777 ms
split is_any_of: 67.7232 ms
split is_from_range: 49.0215 ms
tokenizer: 119.37 ms
One iteration:
tokenizer: 97.2867 ms
split iterator: 26.5444 ms
split iterator back_inserter: 57.7194 ms
split iterator char copy: 34.8381 ms
The tokenizer is so much slower than split that the one-iteration figure doesn't even include the string copy:
{
    string word;
    word.reserve(128);

    timed _("tokenizer");
    boost::char_separator<char> sep(",");
    boost::tokenizer<boost::char_separator<char> > tokens(str, sep);
    for (auto range : tokens)
    {}
}
{
    string word;

    timed _("split iterator");
    for (auto it = make_split_iterator(str, token_finder(is_from_range(',', ',')));
         it != decltype(it)(); ++it)
    {
        word = move(copy_range<string>(*it));
    }
}
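The "split iterator back_inserter" and "split iterator char copy" rows aren't shown either; hypothetical reconstructions (my guesses, using std::copy with back_inserter and string::assign respectively, not code from the post) might look roughly like this:

{
    string word;
    word.reserve(128);

    timed _("split iterator back_inserter");
    for (auto it = make_split_iterator(str, token_finder(is_from_range(',', ',')));
         it != decltype(it)(); ++it)
    {
        word.clear();
        std::copy(it->begin(), it->end(), back_inserter(word));  // append character by character
    }
}
{
    string word;
    word.reserve(128);

    timed _("split iterator char copy");
    for (auto it = make_split_iterator(str, token_finder(is_from_range(',', ',')));
         it != decltype(it)(); ++it)
    {
        word.assign(it->begin(), it->end());  // bulk-copy the characters of the range
    }
}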
The unambiguous conclusion: use split.
Profile it, and you tell us. –
The second one doesn't look like it does any memory allocation, so I'd guess it would be faster. But there's only one way to know for sure. –
[Boost.Spirit](http://www.boost.org/libs/spirit/). [Qi](http://www.boost.org/libs/spirit/doc/html/spirit/qi/tutorials/quick_start.html) will drastically outperform both. – ildjarn
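For reference, a minimal sketch of the Qi approach suggested above, using the usual list-parser idiom (my own illustration with a hypothetical qiSplit helper, not benchmarked in this post):

#include <boost/spirit/include/qi.hpp>
#include <string>
#include <vector>

namespace qi = boost::spirit::qi;

// Parse tokens of one-or-more non-comma characters, separated by commas,
// directly into a vector<string>.
std::vector<std::string> qiSplit(std::string const& str)
{
    std::vector<std::string> contents;
    auto first = str.begin();
    auto last = str.end();
    qi::parse(first, last, +(qi::char_ - ',') % ',', contents);
    return contents;
}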